* [PATCH V8 1/6] perf: Save PMU specific data in task_struct
@ 2025-03-12 18:25 kan.liang
2025-03-12 18:25 ` [PATCH V8 2/6] perf: attach/detach PMU specific data kan.liang
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
Some PMU specific data has to be saved/restored during context switch,
e.g. LBR call stack data. Currently, the data is saved in the event
context structure, but only for per-process events. For system-wide
events, the LBR call stack data is lost across context switches, so the
LBR call stacks are always shorter in comparison to per-process mode.
For example,
Per-process mode:
$perf record --call-graph lbr -- taskset -c 0 ./tchain_edit
- 99.90% 99.86% tchain_edit tchain_edit [.] f3
99.86% _start
__libc_start_main
generic_start_main
main
f1
- f2
f3
System-wide mode:
$perf record --call-graph lbr -a -- taskset -c 0 ./tchain_edit
- 99.88% 99.82% tchain_edit tchain_edit [.] f3
- 62.02% main
f1
f2
f3
- 28.83% f1
- f2
f3
- 8.88% generic_start_main
main
f1
f2
f3
It isn't practical to simply allocate the data for system-wide events in
the CPU context structure for all tasks, since we have no idea which CPU
a task will be scheduled to. Duplicated LBR data would have to be
maintained in every CPU context structure, which is a huge waste.
Otherwise, the LBR data would still be lost if the task is scheduled to
another CPU.
Save the PMU specific data in task_struct. The size of the PMU specific
data for the LBR call stack is 788 bytes. Usually, the overall number of
threads doesn't exceed a few thousand. For 10K threads, keeping the LBR
data would consume an additional ~8MB. The additional space will only be
allocated during LBR call stack monitoring, and released when the
monitoring is finished.
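As a back-of-the-envelope check of the estimate above (a userspace sketch; the constants are taken from this commit message, not from kernel headers):

```c
#include <assert.h>

/* 788 bytes of LBR call stack data per task, as quoted above. */
#define LBR_CTX_DATA_SIZE 788UL

static unsigned long lbr_footprint_bytes(unsigned long nr_threads)
{
	return nr_threads * LBR_CTX_DATA_SIZE;
}
```

For 10K threads this yields 7,880,000 bytes, i.e. roughly 7.5 MiB, matching the ~8MB figure.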
Furthermore, moving task_ctx_data from perf_event_context to task_struct
reduces complexity and makes things clearer. E.g. perf doesn't need to
swap task_ctx_data on the optimized context switch path.
This patch set is just the first step. Further optimizations/extensions
can be built on top of it. E.g. for cgroup profiling, perf only needs to
save/restore the LBR call stack information for tasks in a specific
cgroup, which could reduce the additional space. Also, the LBR call
stack could be made available for software events, or even for debugging
use cases, like dumping LBRs on crash.
The kmem cache of the PMU specific data is saved in struct
perf_ctx_data. It's required when a child task allocates the space.
The refcount in struct perf_ctx_data is used to track the users of the
PMU specific data.
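The refcount rule can be sketched in userspace C (illustrative stand-ins for the kernel's refcount_t API, not the real implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* A user may only take a reference while the object is alive
 * (refcount > 0); a dead object must never be resurrected. */
struct ctx_data {
	atomic_int refcount;
};

/* Mirrors the semantics of refcount_inc_not_zero(). */
static bool ctx_data_get(struct ctx_data *cd)
{
	int old = atomic_load(&cd->refcount);

	do {
		if (old == 0)
			return false;	/* dead object: refuse */
	} while (!atomic_compare_exchange_weak(&cd->refcount, &old, old + 1));

	return true;
}

/* Returns true when the last reference was dropped and the caller
 * must free the object (via RCU in the kernel). */
static bool ctx_data_put(struct ctx_data *cd)
{
	return atomic_fetch_sub(&cd->refcount, 1) == 1;
}
```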
Reviewed-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
The whole patch set was posted several years ago, but it was buried on
LKML without being merged. I've received several requests recently to
fix the LBR issue with system-wide events. Rebase and repost it.
- Rebase on top of Peter's perf/core branch.
commit 347b40fa96a1 ("perf: Extend per event callchain limit to branch stack")
The V6 can be found here.
https://lore.kernel.org/lkml/1626788420-121610-1-git-send-email-kan.liang@linux.intel.com/
include/linux/perf_event.h | 30 ++++++++++++++++++++++++++++++
include/linux/sched.h | 2 ++
kernel/events/core.c | 1 +
3 files changed, 33 insertions(+)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 3e270822b915..b8442047a2b6 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1021,6 +1021,36 @@ struct perf_event_context {
local_t nr_no_switch_fast;
};
+/**
+ * struct perf_ctx_data - PMU specific data for a task
+ * @rcu_head: To avoid races when freeing the PMU specific data
+ * @refcount: To track users
+ * @global: To track system-wide users
+ * @ctx_cache: Kmem cache of PMU specific data
+ * @data: PMU specific data
+ *
+ * Currently, the struct is only used in Intel LBR call stack mode to
+ * save/restore the call stack of a task on context switches.
+ * The data is only allocated when the Intel LBR call stack mode is enabled.
+ * The data will be freed when the mode is disabled. The rcu_head is
+ * used to prevent races when freeing the data.
+ * The content of the data is only accessed on context switch, which
+ * should be protected by rcu_read_lock().
+ *
+ * Careful: Struct perf_ctx_data is added as a pointer in struct task_struct.
+ * When system-wide Intel LBR call stack mode is enabled, a buffer with
+ * constant size will be allocated for each task.
+ * Also, system memory consumption can further grow when the size of
+ * struct perf_ctx_data enlarges.
+ */
+struct perf_ctx_data {
+ struct rcu_head rcu_head;
+ refcount_t refcount;
+ int global;
+ struct kmem_cache *ctx_cache;
+ void *data;
+};
+
struct perf_cpu_pmu_context {
struct perf_event_pmu_context epc;
struct perf_event_pmu_context *task_epc;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 9632e3318e0d..7e183eeb50ec 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -65,6 +65,7 @@ struct mempolicy;
struct nameidata;
struct nsproxy;
struct perf_event_context;
+struct perf_ctx_data;
struct pid_namespace;
struct pipe_inode_info;
struct rcu_node;
@@ -1311,6 +1312,7 @@ struct task_struct {
struct perf_event_context *perf_event_ctxp;
struct mutex perf_event_mutex;
struct list_head perf_event_list;
+ struct perf_ctx_data __rcu *perf_ctx_data;
#endif
#ifdef CONFIG_DEBUG_PREEMPT
unsigned long preempt_disable_ip;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index e7d0b055f96c..2e5f0a204484 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -14061,6 +14061,7 @@ int perf_event_init_task(struct task_struct *child, u64 clone_flags)
child->perf_event_ctxp = NULL;
mutex_init(&child->perf_event_mutex);
INIT_LIST_HEAD(&child->perf_event_list);
+ child->perf_ctx_data = NULL;
ret = perf_event_init_context(child, clone_flags);
if (ret) {
--
2.38.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH V8 2/6] perf: attach/detach PMU specific data
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
@ 2025-03-12 18:25 ` kan.liang
2025-03-12 19:18 ` Peter Zijlstra
2025-03-12 18:25 ` [PATCH V8 3/6] perf: Supply task information to sched_task() kan.liang
` (4 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The LBR call stack data has to be saved/restored during context switch
to fix the issue of shorter LBR call stacks in system-wide mode.
Allocate PMU specific data and attach it to the corresponding
task_struct during LBR call stack monitoring.
When an LBR call stack event is accounted, the perf_ctx_data for the
related tasks will be allocated/attached by attach_perf_ctx_data().
When an LBR call stack event is unaccounted, the perf_ctx_data for the
related tasks will be detached/freed by detach_perf_ctx_data().
The LBR call stack event could be a per-task event or a system-wide
event.
- For a per-task event, perf only allocates the perf_ctx_data for the
current task. If the allocation fails, perf will error out.
- For a system-wide event, perf has to allocate the perf_ctx_data for
both the existing tasks and the upcoming tasks.
The allocation for the existing tasks is done in perf_event_alloc().
If any allocation fails, perf will error out.
The allocation for the new tasks will be done in perf_event_fork().
A global reader/writer semaphore, global_ctx_data_rwsem, is added to
address the global race.
- The perf_ctx_data is only freed by the last LBR call stack event.
The number of per-task events is tracked by the refcount of each task.
Since system-wide events impact all tasks, it's not practical to
go through the whole task list to update the refcount for each
system-wide event. The number of system-wide events is tracked by a
global variable, global_ctx_data_ref.
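The fast-path/slow-path idiom used for the global attach can be sketched as follows (a userspace model with illustrative names; the lock is elided here, whereas the kernel holds global_ctx_data_rwsem for write around the slow path):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int global_ref;	/* models global_ctx_data_ref */
static int nr_alloc_passes;	/* how many times the slow path ran */

/* Lockless "increment unless zero", as in refcount_inc_not_zero(). */
static bool ref_inc_not_zero(atomic_int *r)
{
	int old = atomic_load(r);

	do {
		if (old == 0)
			return false;
	} while (!atomic_compare_exchange_weak(r, &old, old + 1));

	return true;
}

static void attach_global(void)
{
	if (ref_inc_not_zero(&global_ref))
		return;				/* fast path: already attached */

	/* Slow path: would run under percpu_down_write(). */
	if (!ref_inc_not_zero(&global_ref)) {	/* re-check under the lock */
		nr_alloc_passes++;		/* walk all tasks, allocate */
		atomic_store(&global_ref, 1);
	}
}
```

Only the first caller pays for the walk over the whole task list; every later system-wide event just bumps the global refcount.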
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
kernel/events/core.c | 287 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 287 insertions(+)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2e5f0a204484..4336cf26fe35 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -55,6 +55,7 @@
#include <linux/pgtable.h>
#include <linux/buildid.h>
#include <linux/task_work.h>
+#include <linux/percpu-rwsem.h>
#include "internal.h"
@@ -5212,6 +5213,222 @@ static void unaccount_freq_event(void)
atomic_dec(&nr_freq_events);
}
+
+static struct perf_ctx_data *
+alloc_perf_ctx_data(struct kmem_cache *ctx_cache, bool global)
+{
+ struct perf_ctx_data *cd;
+
+ cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+ if (!cd)
+ return NULL;
+
+ cd->data = kmem_cache_zalloc(ctx_cache, GFP_KERNEL);
+ if (!cd->data) {
+ kfree(cd);
+ return NULL;
+ }
+
+ cd->global = global;
+ cd->ctx_cache = ctx_cache;
+ refcount_set(&cd->refcount, 1);
+
+ return cd;
+}
+
+static void free_perf_ctx_data(struct perf_ctx_data *cd)
+{
+ kmem_cache_free(cd->ctx_cache, cd->data);
+ kfree(cd);
+}
+
+static void __free_perf_ctx_data_rcu(struct rcu_head *rcu_head)
+{
+ struct perf_ctx_data *cd;
+
+ cd = container_of(rcu_head, struct perf_ctx_data, rcu_head);
+ free_perf_ctx_data(cd);
+}
+
+static inline void perf_free_ctx_data_rcu(struct perf_ctx_data *cd)
+{
+ call_rcu(&cd->rcu_head, __free_perf_ctx_data_rcu);
+}
+
+static int
+attach_task_ctx_data(struct task_struct *task, struct kmem_cache *ctx_cache,
+ bool global)
+{
+ struct perf_ctx_data *cd, *old = NULL;
+
+ cd = alloc_perf_ctx_data(ctx_cache, global);
+ if (!cd)
+ return -ENOMEM;
+
+ for (;;) {
+ if (try_cmpxchg((struct perf_ctx_data **)&task->perf_ctx_data, &old, cd)) {
+ if (old)
+ perf_free_ctx_data_rcu(old);
+ return 0;
+ }
+
+ if (!old) {
+ /*
+ * After seeing a dead @old, we raced with
+ * removal and lost, try again to install @cd.
+ */
+ continue;
+ }
+
+ if (refcount_inc_not_zero(&old->refcount)) {
+ free_perf_ctx_data(cd); /* unused */
+ return 0;
+ }
+
+ /*
+ * @old is a dead object, refcount==0 is stable, try and
+ * replace it with @cd.
+ */
+ }
+ return 0;
+}
+
+static void __detach_global_ctx_data(void);
+DEFINE_STATIC_PERCPU_RWSEM(global_ctx_data_rwsem);
+static refcount_t global_ctx_data_ref;
+
+static int
+attach_global_ctx_data(struct kmem_cache *ctx_cache)
+{
+ if (refcount_inc_not_zero(&global_ctx_data_ref))
+ return 0;
+
+ percpu_down_write(&global_ctx_data_rwsem);
+ if (!refcount_inc_not_zero(&global_ctx_data_ref)) {
+ struct task_struct *g, *p;
+ struct perf_ctx_data *cd;
+ int ret;
+
+again:
+ /* Allocate everything */
+ rcu_read_lock();
+ for_each_process_thread(g, p) {
+ cd = rcu_dereference(p->perf_ctx_data);
+ if (cd && !cd->global) {
+ cd->global = 1;
+ if (!refcount_inc_not_zero(&cd->refcount))
+ cd = NULL;
+ }
+ if (!cd) {
+ get_task_struct(p);
+ rcu_read_unlock();
+
+ ret = attach_task_ctx_data(p, ctx_cache, true);
+ put_task_struct(p);
+ if (ret) {
+ __detach_global_ctx_data();
+ return ret;
+ }
+ goto again;
+ }
+ }
+ rcu_read_unlock();
+
+ refcount_set(&global_ctx_data_ref, 1);
+ }
+ percpu_up_write(&global_ctx_data_rwsem);
+
+ return 0;
+}
+
+static int
+attach_perf_ctx_data(struct perf_event *event)
+{
+ struct task_struct *task = event->hw.target;
+ struct kmem_cache *ctx_cache = event->pmu->task_ctx_cache;
+
+ if (!ctx_cache)
+ return -ENOMEM;
+
+ if (task)
+ return attach_task_ctx_data(task, ctx_cache, false);
+ else
+ return attach_global_ctx_data(ctx_cache);
+}
+
+static void
+detach_task_ctx_data(struct task_struct *p)
+{
+ struct perf_ctx_data *cd;
+
+ rcu_read_lock();
+ cd = rcu_dereference(p->perf_ctx_data);
+ if (!cd || !refcount_dec_and_test(&cd->refcount)) {
+ rcu_read_unlock();
+ return;
+ }
+ rcu_read_unlock();
+
+ /*
+ * The old ctx_data may be lost because of a race.
+ * Nothing needs to be done in that case.
+ * See attach_task_ctx_data().
+ */
+ if (try_cmpxchg((struct perf_ctx_data **)&p->perf_ctx_data, &cd, NULL))
+ perf_free_ctx_data_rcu(cd);
+}
+
+static void __detach_global_ctx_data(void)
+{
+ struct task_struct *g, *p;
+ struct perf_ctx_data *cd;
+
+again:
+ rcu_read_lock();
+ for_each_process_thread(g, p) {
+ cd = rcu_dereference(p->perf_ctx_data);
+ if (!cd || !cd->global)
+ continue;
+ cd->global = 0;
+ get_task_struct(p);
+ rcu_read_unlock();
+
+ detach_task_ctx_data(p);
+ put_task_struct(p);
+ goto again;
+ }
+ rcu_read_unlock();
+}
+
+static void detach_global_ctx_data(void)
+{
+ if (refcount_dec_not_one(&global_ctx_data_ref))
+ return;
+
+ percpu_down_write(&global_ctx_data_rwsem);
+ if (!refcount_dec_and_test(&global_ctx_data_ref))
+ goto unlock;
+
+ /* remove everything */
+ __detach_global_ctx_data();
+
+unlock:
+ percpu_up_write(&global_ctx_data_rwsem);
+}
+
+static void detach_perf_ctx_data(struct perf_event *event)
+{
+ struct task_struct *task = event->hw.target;
+
+ if (!event->pmu->task_ctx_cache)
+ return;
+
+ if (task)
+ detach_task_ctx_data(task);
+ else
+ detach_global_ctx_data();
+}
+
static void unaccount_event(struct perf_event *event)
{
bool dec = false;
@@ -5249,6 +5466,8 @@ static void unaccount_event(struct perf_event *event)
atomic_dec(&nr_bpf_events);
if (event->attr.text_poke)
atomic_dec(&nr_text_poke_events);
+ if (event->attach_state & PERF_ATTACH_TASK_DATA)
+ detach_perf_ctx_data(event);
if (dec) {
if (!atomic_add_unless(&perf_sched_count, -1, 1))
@@ -5382,6 +5601,9 @@ static void perf_pending_task_sync(struct perf_event *event)
/* vs perf_event_alloc() error */
static void __free_event(struct perf_event *event)
{
+ if (event->security)
+ security_perf_event_free(event);
+
if (event->attach_state & PERF_ATTACH_CALLCHAIN)
put_callchain_buffers();
@@ -8598,10 +8820,62 @@ static void perf_event_task(struct task_struct *task,
task_ctx);
}
+/*
+ * Allocate data for a new task when profiling system-wide
+ * events which require PMU specific data
+ */
+static void
+perf_event_alloc_task_data(struct task_struct *child,
+ struct task_struct *parent)
+{
+ struct kmem_cache *ctx_cache = NULL;
+ struct perf_ctx_data *cd;
+
+ if (!refcount_read(&global_ctx_data_ref))
+ return;
+
+ rcu_read_lock();
+ cd = rcu_dereference(parent->perf_ctx_data);
+ if (cd)
+ ctx_cache = cd->ctx_cache;
+ rcu_read_unlock();
+
+ if (!ctx_cache)
+ return;
+
+ percpu_down_read(&global_ctx_data_rwsem);
+
+ rcu_read_lock();
+ cd = rcu_dereference(child->perf_ctx_data);
+
+ if (!cd) {
+ /*
+ * A system-wide event may be unaccounted
+ * while attaching the perf_ctx_data.
+ */
+ if (!refcount_read(&global_ctx_data_ref))
+ goto rcu_unlock;
+ rcu_read_unlock();
+ attach_task_ctx_data(child, ctx_cache, true);
+ goto up_rwsem;
+ }
+
+ if (!cd->global) {
+ cd->global = 1;
+ refcount_inc(&cd->refcount);
+ }
+
+rcu_unlock:
+ rcu_read_unlock();
+up_rwsem:
+ percpu_up_read(&global_ctx_data_rwsem);
+}
+
void perf_event_fork(struct task_struct *task)
{
perf_event_task(task, NULL, 1);
perf_event_namespaces(task);
+ perf_event_alloc_task_data(task, current);
}
/*
@@ -12551,6 +12825,12 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
if (err)
return ERR_PTR(err);
+ if (event->attach_state & PERF_ATTACH_TASK_DATA) {
+ err = attach_perf_ctx_data(event);
+ if (err)
+ return ERR_PTR(err);
+ }
+
/* symmetric to unaccount_event() in _free_event() */
account_event(event);
@@ -13628,6 +13908,13 @@ void perf_event_exit_task(struct task_struct *child)
* At this point we need to send EXIT events to cpu contexts.
*/
perf_event_task(child, NULL, 0);
+
+ /*
+ * Detach the perf_ctx_data for the system-wide event.
+ */
+ percpu_down_read(&global_ctx_data_rwsem);
+ detach_task_ctx_data(child);
+ percpu_up_read(&global_ctx_data_rwsem);
}
static void perf_free_event(struct perf_event *event,
--
2.38.1
* [PATCH V8 3/6] perf: Supply task information to sched_task()
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
2025-03-12 18:25 ` [PATCH V8 2/6] perf: attach/detach PMU specific data kan.liang
@ 2025-03-12 18:25 ` kan.liang
2025-03-12 18:25 ` [PATCH V8 4/6] perf/x86/lbr: Fix shorter LBRs call stacks for the system-wide mode kan.liang
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
To save/restore LBR call stack data in system-wide mode, the task_struct
information is required.
Extend the parameters of sched_task() to supply the task_struct
information. On schedule in, the LBR call stack data for the new task
will be restored. On schedule out, the LBR call stack data for the old
task will be saved. Only the required task_struct information is passed.
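The save/restore direction can be modeled with a toy sketch (the structs are illustrative stand-ins, not the kernel types):

```c
#include <assert.h>
#include <stdbool.h>

struct task {
	int lbr_ctx;		/* pretend per-task LBR call stack data */
};

static int hw_lbr;		/* pretend LBR MSR state on this CPU */

/* sched_task() restores state for the incoming task and saves it
 * for the outgoing one, keyed off sched_in. */
static void sched_task(struct task *task, bool sched_in)
{
	if (sched_in)
		hw_lbr = task->lbr_ctx;		/* restore for new task */
	else
		task->lbr_ctx = hw_lbr;		/* save for old task */
}
```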
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/powerpc/perf/core-book3s.c | 8 ++++++--
arch/x86/events/amd/brs.c | 3 ++-
arch/x86/events/amd/lbr.c | 3 ++-
arch/x86/events/core.c | 5 +++--
arch/x86/events/intel/core.c | 4 ++--
arch/x86/events/intel/lbr.c | 3 ++-
arch/x86/events/perf_event.h | 14 +++++++++-----
include/linux/perf_event.h | 2 +-
kernel/events/core.c | 20 +++++++++++---------
9 files changed, 38 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index 2b79171ee185..f4e03aaabb4c 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -132,7 +132,10 @@ static unsigned long ebb_switch_in(bool ebb, struct cpu_hw_events *cpuhw)
static inline void power_pmu_bhrb_enable(struct perf_event *event) {}
static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
-static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) {}
+static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
+{
+}
static inline void power_pmu_bhrb_read(struct perf_event *event, struct cpu_hw_events *cpuhw) {}
static void pmao_restore_workaround(bool ebb) { }
#endif /* CONFIG_PPC32 */
@@ -444,7 +447,8 @@ static void power_pmu_bhrb_disable(struct perf_event *event)
/* Called from ctxsw to prevent one process's branch entries to
* mingle with the other process's entries during context switch.
*/
-static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+static void power_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
if (!ppmu->bhrb_nr)
return;
diff --git a/arch/x86/events/amd/brs.c b/arch/x86/events/amd/brs.c
index 780acd3dff22..ec3427463382 100644
--- a/arch/x86/events/amd/brs.c
+++ b/arch/x86/events/amd/brs.c
@@ -381,7 +381,8 @@ static void amd_brs_poison_buffer(void)
* On ctxswin, sched_in = true, called after the PMU has started
* On ctxswout, sched_in = false, called before the PMU is stopped
*/
-void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/x86/events/amd/lbr.c b/arch/x86/events/amd/lbr.c
index 19c7b76e21bc..c06ccca96851 100644
--- a/arch/x86/events/amd/lbr.c
+++ b/arch/x86/events/amd/lbr.c
@@ -371,7 +371,8 @@ void amd_pmu_lbr_del(struct perf_event *event)
perf_sched_cb_dec(event->pmu);
}
-void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 20ad5cca6ad2..ae8c90adca0f 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2638,9 +2638,10 @@ static const struct attribute_group *x86_pmu_attr_groups[] = {
NULL,
};
-static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
- static_call_cond(x86_pmu_sched_task)(pmu_ctx, sched_in);
+ static_call_cond(x86_pmu_sched_task)(pmu_ctx, task, sched_in);
}
static void x86_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5a8d6e1a9334..3efbb03fd77e 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5294,10 +5294,10 @@ static void intel_pmu_cpu_dead(int cpu)
}
static void intel_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
- bool sched_in)
+ struct task_struct *task, bool sched_in)
{
intel_pmu_pebs_sched_task(pmu_ctx, sched_in);
- intel_pmu_lbr_sched_task(pmu_ctx, sched_in);
+ intel_pmu_lbr_sched_task(pmu_ctx, task, sched_in);
}
static void intel_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index dc641b50814e..dafeee216f3b 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -539,7 +539,8 @@ void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
task_context_opt(next_ctx_data)->lbr_callstack_users);
}
-void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
void *task_ctx;
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index a698e6484b3b..0d5019fb3ad2 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -875,7 +875,7 @@ struct x86_pmu {
void (*check_microcode)(void);
void (*sched_task)(struct perf_event_pmu_context *pmu_ctx,
- bool sched_in);
+ struct task_struct *task, bool sched_in);
/*
* Intel Arch Perfmon v2+
@@ -1408,7 +1408,8 @@ void amd_pmu_lbr_reset(void);
void amd_pmu_lbr_read(void);
void amd_pmu_lbr_add(struct perf_event *event);
void amd_pmu_lbr_del(struct perf_event *event);
-void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+void amd_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in);
void amd_pmu_lbr_enable_all(void);
void amd_pmu_lbr_disable_all(void);
int amd_pmu_lbr_hw_config(struct perf_event *event);
@@ -1462,7 +1463,8 @@ static inline void amd_pmu_brs_del(struct perf_event *event)
perf_sched_cb_dec(event->pmu);
}
-void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in);
#else
static inline int amd_brs_init(void)
{
@@ -1487,7 +1489,8 @@ static inline void amd_pmu_brs_del(struct perf_event *event)
{
}
-static inline void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+static inline void amd_pmu_brs_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in)
{
}
@@ -1670,7 +1673,8 @@ void intel_pmu_lbr_save_brstack(struct perf_sample_data *data,
void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
struct perf_event_pmu_context *next_epc);
-void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
+ struct task_struct *task, bool sched_in);
u64 lbr_from_signext_quirk_wr(u64 val);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index b8442047a2b6..beb6799d80d0 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -494,7 +494,7 @@ struct pmu {
* context-switches callback
*/
void (*sched_task) (struct perf_event_pmu_context *pmu_ctx,
- bool sched_in);
+ struct task_struct *task, bool sched_in);
/*
* Kmem cache of PMU specific data
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4336cf26fe35..7b31ae194a08 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3613,7 +3613,8 @@ static void perf_event_swap_task_ctx_data(struct perf_event_context *prev_ctx,
}
}
-static void perf_ctx_sched_task_cb(struct perf_event_context *ctx, bool sched_in)
+static void perf_ctx_sched_task_cb(struct perf_event_context *ctx,
+ struct task_struct *task, bool sched_in)
{
struct perf_event_pmu_context *pmu_ctx;
struct perf_cpu_pmu_context *cpc;
@@ -3622,7 +3623,7 @@ static void perf_ctx_sched_task_cb(struct perf_event_context *ctx, bool sched_in
cpc = this_cpc(pmu_ctx->pmu);
if (cpc->sched_cb_usage && pmu_ctx->pmu->sched_task)
- pmu_ctx->pmu->sched_task(pmu_ctx, sched_in);
+ pmu_ctx->pmu->sched_task(pmu_ctx, task, sched_in);
}
}
@@ -3685,7 +3686,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
WRITE_ONCE(ctx->task, next);
WRITE_ONCE(next_ctx->task, task);
- perf_ctx_sched_task_cb(ctx, false);
+ perf_ctx_sched_task_cb(ctx, task, false);
perf_event_swap_task_ctx_data(ctx, next_ctx);
perf_ctx_enable(ctx, false);
@@ -3715,7 +3716,7 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
perf_ctx_disable(ctx, false);
inside_switch:
- perf_ctx_sched_task_cb(ctx, false);
+ perf_ctx_sched_task_cb(ctx, task, false);
task_ctx_sched_out(ctx, NULL, EVENT_ALL);
perf_ctx_enable(ctx, false);
@@ -3757,7 +3758,8 @@ void perf_sched_cb_inc(struct pmu *pmu)
* PEBS requires this to provide PID/TID information. This requires we flush
* all queued PEBS records before we context switch to a new task.
*/
-static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc, bool sched_in)
+static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc,
+ struct task_struct *task, bool sched_in)
{
struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context);
struct pmu *pmu;
@@ -3771,7 +3773,7 @@ static void __perf_pmu_sched_task(struct perf_cpu_pmu_context *cpc, bool sched_i
perf_ctx_lock(cpuctx, cpuctx->task_ctx);
perf_pmu_disable(pmu);
- pmu->sched_task(cpc->task_epc, sched_in);
+ pmu->sched_task(cpc->task_epc, task, sched_in);
perf_pmu_enable(pmu);
perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
@@ -3789,7 +3791,7 @@ static void perf_pmu_sched_task(struct task_struct *prev,
return;
list_for_each_entry(cpc, this_cpu_ptr(&sched_cb_list), sched_cb_entry)
- __perf_pmu_sched_task(cpc, sched_in);
+ __perf_pmu_sched_task(cpc, sched_in ? next : prev, sched_in);
}
static void perf_event_switch(struct task_struct *task,
@@ -4083,7 +4085,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
perf_ctx_lock(cpuctx, ctx);
perf_ctx_disable(ctx, false);
- perf_ctx_sched_task_cb(ctx, true);
+ perf_ctx_sched_task_cb(ctx, task, true);
perf_ctx_enable(ctx, false);
perf_ctx_unlock(cpuctx, ctx);
@@ -4114,7 +4116,7 @@ static void perf_event_context_sched_in(struct task_struct *task)
perf_event_sched_in(cpuctx, ctx, NULL);
- perf_ctx_sched_task_cb(cpuctx->task_ctx, true);
+ perf_ctx_sched_task_cb(cpuctx->task_ctx, task, true);
if (!RB_EMPTY_ROOT(&ctx->pinned_groups.tree))
perf_ctx_enable(&cpuctx->ctx, false);
--
2.38.1
* [PATCH V8 4/6] perf/x86/lbr: Fix shorter LBRs call stacks for the system-wide mode
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
2025-03-12 18:25 ` [PATCH V8 2/6] perf: attach/detach PMU specific data kan.liang
2025-03-12 18:25 ` [PATCH V8 3/6] perf: Supply task information to sched_task() kan.liang
@ 2025-03-12 18:25 ` kan.liang
2025-03-12 18:25 ` [PATCH V8 5/6] perf/x86: Remove swap_task_ctx() kan.liang
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
In system-wide mode, LBR call stacks are shorter in comparison to
per-process mode, because LBR MSRs are reset during a context switch in
system-wide mode. For the LBR call stack, the LBRs should always be
saved/restored during a context switch.
Use the space in task_struct to save/restore the LBR call stack data.
For a system-wide event, it's unnecessary to update the
lbr_callstack_users for each thread. Add a variable in x86_pmu to
indicate whether a system-wide event is active.
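The resulting check can be sketched in userspace C (simplified types; the kernel keeps the per-task counter in the task's PMU specific data and the global one in x86_pmu):

```c
#include <assert.h>
#include <stdbool.h>

struct task_ctx_opt {
	unsigned long lbr_callstack_users;	/* per-task users */
};

/* Stand-in for x86_pmu.lbr_callstack_users. */
static unsigned long syswide_lbr_callstack_users;

/* LBR call stack state must be kept if either the task itself has
 * users or a system-wide event is active. */
static bool has_lbr_callstack_users(const struct task_ctx_opt *opt)
{
	return opt->lbr_callstack_users || syswide_lbr_callstack_users;
}
```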
Fixes: 76cb2c617f12 ("perf/x86/intel: Save/restore LBR stack during context switch")
Reported-by: Andi Kleen <ak@linux.intel.com>
Reported-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Debugged-by: Alexey Budankov <alexey.budankov@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/intel/lbr.c | 47 ++++++++++++++++++++++++++++++------
arch/x86/events/perf_event.h | 1 +
2 files changed, 40 insertions(+), 8 deletions(-)
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index dafeee216f3b..24719adbcd7e 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -422,11 +422,17 @@ static __always_inline bool lbr_is_reset_in_cstate(void *ctx)
return !rdlbr_from(((struct x86_perf_task_context *)ctx)->tos, NULL);
}
+static inline bool has_lbr_callstack_users(void *ctx)
+{
+ return task_context_opt(ctx)->lbr_callstack_users ||
+ x86_pmu.lbr_callstack_users;
+}
+
static void __intel_pmu_lbr_restore(void *ctx)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- if (task_context_opt(ctx)->lbr_callstack_users == 0 ||
+ if (!has_lbr_callstack_users(ctx) ||
task_context_opt(ctx)->lbr_stack_state == LBR_NONE) {
intel_pmu_lbr_reset();
return;
@@ -503,7 +509,7 @@ static void __intel_pmu_lbr_save(void *ctx)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- if (task_context_opt(ctx)->lbr_callstack_users == 0) {
+ if (!has_lbr_callstack_users(ctx)) {
task_context_opt(ctx)->lbr_stack_state = LBR_NONE;
return;
}
@@ -543,6 +549,7 @@ void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
struct task_struct *task, bool sched_in)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct perf_ctx_data *ctx_data;
void *task_ctx;
if (!cpuc->lbr_users)
@@ -553,14 +560,18 @@ void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
* the task was scheduled out, restore the stack. Otherwise flush
* the LBR stack.
*/
- task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL;
+ rcu_read_lock();
+ ctx_data = rcu_dereference(task->perf_ctx_data);
+ task_ctx = ctx_data ? ctx_data->data : NULL;
if (task_ctx) {
if (sched_in)
__intel_pmu_lbr_restore(task_ctx);
else
__intel_pmu_lbr_save(task_ctx);
+ rcu_read_unlock();
return;
}
+ rcu_read_unlock();
/*
* Since a context switch can flip the address space and LBR entries
@@ -589,9 +600,19 @@ void intel_pmu_lbr_add(struct perf_event *event)
cpuc->br_sel = event->hw.branch_reg.reg;
- if (branch_user_callstack(cpuc->br_sel) && event->pmu_ctx->task_ctx_data)
- task_context_opt(event->pmu_ctx->task_ctx_data)->lbr_callstack_users++;
+ if (branch_user_callstack(cpuc->br_sel)) {
+ if (event->attach_state & PERF_ATTACH_TASK) {
+ struct task_struct *task = event->hw.target;
+ struct perf_ctx_data *ctx_data;
+ rcu_read_lock();
+ ctx_data = rcu_dereference(task->perf_ctx_data);
+ if (ctx_data)
+ task_context_opt(ctx_data->data)->lbr_callstack_users++;
+ rcu_read_unlock();
+ } else
+ x86_pmu.lbr_callstack_users++;
+ }
/*
* Request pmu::sched_task() callback, which will fire inside the
* regular perf event scheduling, so that call will:
@@ -665,9 +686,19 @@ void intel_pmu_lbr_del(struct perf_event *event)
if (!x86_pmu.lbr_nr)
return;
- if (branch_user_callstack(cpuc->br_sel) &&
- event->pmu_ctx->task_ctx_data)
- task_context_opt(event->pmu_ctx->task_ctx_data)->lbr_callstack_users--;
+ if (branch_user_callstack(cpuc->br_sel)) {
+ if (event->attach_state & PERF_ATTACH_TASK) {
+ struct task_struct *task = event->hw.target;
+ struct perf_ctx_data *ctx_data;
+
+ rcu_read_lock();
+ ctx_data = rcu_dereference(task->perf_ctx_data);
+ if (ctx_data)
+ task_context_opt(ctx_data->data)->lbr_callstack_users--;
+ rcu_read_unlock();
+ } else
+ x86_pmu.lbr_callstack_users--;
+ }
if (event->hw.flags & PERF_X86_EVENT_LBR_SELECT)
cpuc->lbr_select = 0;
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 0d5019fb3ad2..67d2d250248c 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -920,6 +920,7 @@ struct x86_pmu {
const int *lbr_sel_map; /* lbr_select mappings */
int *lbr_ctl_map; /* LBR_CTL mappings */
};
+ u64 lbr_callstack_users; /* lbr callstack system wide users */
bool lbr_double_abort; /* duplicated lbr aborts */
bool lbr_pt_coexist; /* (LBR|BTS) may coexist with PT */
--
2.38.1
* [PATCH V8 5/6] perf/x86: Remove swap_task_ctx()
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
` (2 preceding siblings ...)
2025-03-12 18:25 ` [PATCH V8 4/6] perf/x86/lbr: Fix shorter LBRs call stacks for the system-wide mode kan.liang
@ 2025-03-12 18:25 ` kan.liang
2025-03-12 18:25 ` [PATCH V8 6/6] perf: Clean up pmu specific data kan.liang
2025-03-12 19:05 ` [PATCH V8 1/6] perf: Save PMU specific data in task_struct Peter Zijlstra
5 siblings, 0 replies; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The PMU specific data is saved in task_struct now. It no longer needs
to be swapped between contexts.
Remove swap_task_ctx() support.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/core.c | 9 ---------
arch/x86/events/intel/core.c | 7 -------
arch/x86/events/intel/lbr.c | 23 -----------------------
arch/x86/events/perf_event.h | 11 -----------
4 files changed, 50 deletions(-)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index ae8c90adca0f..833478ffbbf5 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -87,7 +87,6 @@ DEFINE_STATIC_CALL_NULL(x86_pmu_commit_scheduling, *x86_pmu.commit_scheduling);
DEFINE_STATIC_CALL_NULL(x86_pmu_stop_scheduling, *x86_pmu.stop_scheduling);
DEFINE_STATIC_CALL_NULL(x86_pmu_sched_task, *x86_pmu.sched_task);
-DEFINE_STATIC_CALL_NULL(x86_pmu_swap_task_ctx, *x86_pmu.swap_task_ctx);
DEFINE_STATIC_CALL_NULL(x86_pmu_drain_pebs, *x86_pmu.drain_pebs);
DEFINE_STATIC_CALL_NULL(x86_pmu_pebs_aliases, *x86_pmu.pebs_aliases);
@@ -2039,7 +2038,6 @@ static void x86_pmu_static_call_update(void)
static_call_update(x86_pmu_stop_scheduling, x86_pmu.stop_scheduling);
static_call_update(x86_pmu_sched_task, x86_pmu.sched_task);
- static_call_update(x86_pmu_swap_task_ctx, x86_pmu.swap_task_ctx);
static_call_update(x86_pmu_drain_pebs, x86_pmu.drain_pebs);
static_call_update(x86_pmu_pebs_aliases, x86_pmu.pebs_aliases);
@@ -2644,12 +2642,6 @@ static void x86_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
static_call_cond(x86_pmu_sched_task)(pmu_ctx, task, sched_in);
}
-static void x86_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc)
-{
- static_call_cond(x86_pmu_swap_task_ctx)(prev_epc, next_epc);
-}
-
void perf_check_microcode(void)
{
if (x86_pmu.check_microcode)
@@ -2714,7 +2706,6 @@ static struct pmu pmu = {
.event_idx = x86_pmu_event_idx,
.sched_task = x86_pmu_sched_task,
- .swap_task_ctx = x86_pmu_swap_task_ctx,
.check_period = x86_pmu_check_period,
.aux_output_match = x86_pmu_aux_output_match,
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 3efbb03fd77e..dc38dec244c1 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -5300,12 +5300,6 @@ static void intel_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
intel_pmu_lbr_sched_task(pmu_ctx, task, sched_in);
}
-static void intel_pmu_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc)
-{
- intel_pmu_lbr_swap_task_ctx(prev_epc, next_epc);
-}
-
static int intel_pmu_check_period(struct perf_event *event, u64 value)
{
return intel_pmu_has_bts_period(event, value) ? -EINVAL : 0;
@@ -5474,7 +5468,6 @@ static __initconst const struct x86_pmu intel_pmu = {
.guest_get_msrs = intel_guest_get_msrs,
.sched_task = intel_pmu_sched_task,
- .swap_task_ctx = intel_pmu_swap_task_ctx,
.check_period = intel_pmu_check_period,
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 24719adbcd7e..f44c3d866f24 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -522,29 +522,6 @@ static void __intel_pmu_lbr_save(void *ctx)
cpuc->last_log_id = ++task_context_opt(ctx)->log_id;
}
-void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc)
-{
- void *prev_ctx_data, *next_ctx_data;
-
- swap(prev_epc->task_ctx_data, next_epc->task_ctx_data);
-
- /*
- * Architecture specific synchronization makes sense in case
- * both prev_epc->task_ctx_data and next_epc->task_ctx_data
- * pointers are allocated.
- */
-
- prev_ctx_data = next_epc->task_ctx_data;
- next_ctx_data = prev_epc->task_ctx_data;
-
- if (!prev_ctx_data || !next_ctx_data)
- return;
-
- swap(task_context_opt(prev_ctx_data)->lbr_callstack_users,
- task_context_opt(next_ctx_data)->lbr_callstack_users);
-}
-
void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
struct task_struct *task, bool sched_in)
{
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 67d2d250248c..8e5a4c3c5b95 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -958,14 +958,6 @@ struct x86_pmu {
*/
int num_topdown_events;
- /*
- * perf task context (i.e. struct perf_event_pmu_context::task_ctx_data)
- * switch helper to bridge calls from perf/core to perf/x86.
- * See struct pmu::swap_task_ctx() usage for examples;
- */
- void (*swap_task_ctx)(struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc);
-
/*
* AMD bits
*/
@@ -1671,9 +1663,6 @@ void intel_pmu_lbr_save_brstack(struct perf_sample_data *data,
struct cpu_hw_events *cpuc,
struct perf_event *event);
-void intel_pmu_lbr_swap_task_ctx(struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc);
-
void intel_pmu_lbr_sched_task(struct perf_event_pmu_context *pmu_ctx,
struct task_struct *task, bool sched_in);
--
2.38.1
* [PATCH V8 6/6] perf: Clean up pmu specific data
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
` (3 preceding siblings ...)
2025-03-12 18:25 ` [PATCH V8 5/6] perf/x86: Remove swap_task_ctx() kan.liang
@ 2025-03-12 18:25 ` kan.liang
2025-03-12 19:05 ` [PATCH V8 1/6] perf: Save PMU specific data in task_struct Peter Zijlstra
5 siblings, 0 replies; 11+ messages in thread
From: kan.liang @ 2025-03-12 18:25 UTC (permalink / raw)
To: peterz, mingo, tglx, bp, acme, namhyung, irogers, linux-kernel
Cc: ak, eranian, Kan Liang
From: Kan Liang <kan.liang@linux.intel.com>
The PMU specific data is saved in task_struct now. Remove it from the
event context structure.
Remove swap_task_ctx() as well.
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
include/linux/perf_event.h | 12 ------
kernel/events/core.c | 76 ++------------------------------------
2 files changed, 3 insertions(+), 85 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index beb6799d80d0..c22bc7214d99 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -501,16 +501,6 @@ struct pmu {
*/
struct kmem_cache *task_ctx_cache;
- /*
- * PMU specific parts of task perf event context (i.e. ctx->task_ctx_data)
- * can be synchronized using this function. See Intel LBR callstack support
- * implementation and Perf core context switch handling callbacks for usage
- * examples.
- */
- void (*swap_task_ctx) (struct perf_event_pmu_context *prev_epc,
- struct perf_event_pmu_context *next_epc);
- /* optional */
-
/*
* Set up pmu-private data structures for an AUX area
*/
@@ -932,7 +922,6 @@ struct perf_event_pmu_context {
atomic_t refcount; /* event <-> epc */
struct rcu_head rcu_head;
- void *task_ctx_data; /* pmu specific data */
/*
* Set when one or more (plausibly active) event can't be scheduled
* due to pmu overcommit or pmu constraints, except tolerant to
@@ -980,7 +969,6 @@ struct perf_event_context {
int nr_user;
int is_active;
- int nr_task_data;
int nr_stat;
int nr_freq;
int rotate_disable;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7b31ae194a08..0c749b3bce86 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1254,20 +1254,6 @@ static void get_ctx(struct perf_event_context *ctx)
refcount_inc(&ctx->refcount);
}
-static void *alloc_task_ctx_data(struct pmu *pmu)
-{
- if (pmu->task_ctx_cache)
- return kmem_cache_zalloc(pmu->task_ctx_cache, GFP_KERNEL);
-
- return NULL;
-}
-
-static void free_task_ctx_data(struct pmu *pmu, void *task_ctx_data)
-{
- if (pmu->task_ctx_cache && task_ctx_data)
- kmem_cache_free(pmu->task_ctx_cache, task_ctx_data);
-}
-
static void free_ctx(struct rcu_head *head)
{
struct perf_event_context *ctx;
@@ -3577,42 +3563,6 @@ static void perf_event_sync_stat(struct perf_event_context *ctx,
}
}
-#define double_list_for_each_entry(pos1, pos2, head1, head2, member) \
- for (pos1 = list_first_entry(head1, typeof(*pos1), member), \
- pos2 = list_first_entry(head2, typeof(*pos2), member); \
- !list_entry_is_head(pos1, head1, member) && \
- !list_entry_is_head(pos2, head2, member); \
- pos1 = list_next_entry(pos1, member), \
- pos2 = list_next_entry(pos2, member))
-
-static void perf_event_swap_task_ctx_data(struct perf_event_context *prev_ctx,
- struct perf_event_context *next_ctx)
-{
- struct perf_event_pmu_context *prev_epc, *next_epc;
-
- if (!prev_ctx->nr_task_data)
- return;
-
- double_list_for_each_entry(prev_epc, next_epc,
- &prev_ctx->pmu_ctx_list, &next_ctx->pmu_ctx_list,
- pmu_ctx_entry) {
-
- if (WARN_ON_ONCE(prev_epc->pmu != next_epc->pmu))
- continue;
-
- /*
- * PMU specific parts of task perf context can require
- * additional synchronization. As an example of such
- * synchronization see implementation details of Intel
- * LBR call stack data profiling;
- */
- if (prev_epc->pmu->swap_task_ctx)
- prev_epc->pmu->swap_task_ctx(prev_epc, next_epc);
- else
- swap(prev_epc->task_ctx_data, next_epc->task_ctx_data);
- }
-}
-
static void perf_ctx_sched_task_cb(struct perf_event_context *ctx,
struct task_struct *task, bool sched_in)
{
@@ -3687,16 +3637,15 @@ perf_event_context_sched_out(struct task_struct *task, struct task_struct *next)
WRITE_ONCE(next_ctx->task, task);
perf_ctx_sched_task_cb(ctx, task, false);
- perf_event_swap_task_ctx_data(ctx, next_ctx);
perf_ctx_enable(ctx, false);
/*
* RCU_INIT_POINTER here is safe because we've not
* modified the ctx and the above modification of
- * ctx->task and ctx->task_ctx_data are immaterial
- * since those values are always verified under
- * ctx->lock which we're now holding.
+ * ctx->task is immaterial since this value is
+ * always verified under ctx->lock which we're now
+ * holding.
*/
RCU_INIT_POINTER(task->perf_event_ctxp, next_ctx);
RCU_INIT_POINTER(next->perf_event_ctxp, ctx);
@@ -5000,7 +4949,6 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
struct perf_event *event)
{
struct perf_event_pmu_context *new = NULL, *pos = NULL, *epc;
- void *task_ctx_data = NULL;
if (!ctx->task) {
/*
@@ -5033,14 +4981,6 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
if (!new)
return ERR_PTR(-ENOMEM);
- if (event->attach_state & PERF_ATTACH_TASK_DATA) {
- task_ctx_data = alloc_task_ctx_data(pmu);
- if (!task_ctx_data) {
- kfree(new);
- return ERR_PTR(-ENOMEM);
- }
- }
-
__perf_init_event_pmu_context(new, pmu);
/*
@@ -5075,14 +5015,7 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
epc->ctx = ctx;
found_epc:
- if (task_ctx_data && !epc->task_ctx_data) {
- epc->task_ctx_data = task_ctx_data;
- task_ctx_data = NULL;
- ctx->nr_task_data++;
- }
raw_spin_unlock_irq(&ctx->lock);
-
- free_task_ctx_data(pmu, task_ctx_data);
kfree(new);
return epc;
@@ -5098,7 +5031,6 @@ static void free_cpc_rcu(struct rcu_head *head)
struct perf_cpu_pmu_context *cpc =
container_of(head, typeof(*cpc), epc.rcu_head);
- kfree(cpc->epc.task_ctx_data);
kfree(cpc);
}
@@ -5106,7 +5038,6 @@ static void free_epc_rcu(struct rcu_head *head)
{
struct perf_event_pmu_context *epc = container_of(head, typeof(*epc), rcu_head);
- kfree(epc->task_ctx_data);
kfree(epc);
}
@@ -14092,7 +14023,6 @@ inherit_event(struct perf_event *parent_event,
if (is_orphaned_event(parent_event) ||
!atomic_long_inc_not_zero(&parent_event->refcount)) {
mutex_unlock(&parent_event->child_mutex);
- /* task_ctx_data is freed with child_ctx */
free_event(child_event);
return NULL;
}
--
2.38.1
* Re: [PATCH V8 1/6] perf: Save PMU specific data in task_struct
2025-03-12 18:25 [PATCH V8 1/6] perf: Save PMU specific data in task_struct kan.liang
` (4 preceding siblings ...)
2025-03-12 18:25 ` [PATCH V8 6/6] perf: Clean up pmu specific data kan.liang
@ 2025-03-12 19:05 ` Peter Zijlstra
2025-03-12 19:41 ` Liang, Kan
5 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2025-03-12 19:05 UTC (permalink / raw)
To: kan.liang
Cc: mingo, tglx, bp, acme, namhyung, irogers, linux-kernel, ak,
eranian
I'm sorry, but since I spotted a bug in the second patch, I'm going to
reply and suggest some overall changes.
On Wed, Mar 12, 2025 at 11:25:20AM -0700, kan.liang@linux.intel.com wrote:
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 3e270822b915..b8442047a2b6 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1021,6 +1021,36 @@ struct perf_event_context {
> local_t nr_no_switch_fast;
> };
>
> +/**
> + * struct perf_ctx_data - PMU specific data for a task
> + * @rcu_head: To avoid the race on free PMU specific data
> + * @refcount: To track users
> + * @global: To track system-wide users
> + * @ctx_cache: Kmem cache of PMU specific data
> + * @data: PMU specific data
> + *
> + * Currently, the struct is only used in Intel LBR call stack mode to
> + * save/restore the call stack of a task on context switches.
> + * The data only be allocated when Intel LBR call stack mode is enabled.
> + * The data will be freed when the mode is disabled. The rcu_head is
> + * used to prevent the race on free the data.
> + * The content of the data will only be accessed in context switch, which
> + * should be protected by rcu_read_lock().
> + *
> + * Careful: Struct perf_ctx_data is added as a pointor in struct task_struct.
pointer
> + * When system-wide Intel LBR call stack mode is enabled, a buffer with
> + * constant size will be allocated for each task.
> + * Also, system memory consumption can further grow when the size of
> + * struct perf_ctx_data enlarges.
> + */
> +struct perf_ctx_data {
> + struct rcu_head rcu_head;
> + refcount_t refcount;
> + int global;
> + struct kmem_cache *ctx_cache;
> + void *data;
> +};
I can't remember why this is complicated like this. Why do we have a
kmemcache and yet another data pointer in there?
Specifically, why can't we do something like:
struct perf_ctx_data {
struct rcu_head rcu;
refcount_t refcount;
int global;
char data[];
};
and simply allocate the whole thing as a single allocation?
So then the allocation is something like:
cd = kzalloc(sizeof(*cd) + event->pmu->task_ctx_size, GFP_KERNEL);
* Re: [PATCH V8 2/6] perf: attach/detach PMU specific data
2025-03-12 18:25 ` [PATCH V8 2/6] perf: attach/detach PMU specific data kan.liang
@ 2025-03-12 19:18 ` Peter Zijlstra
2025-03-12 19:52 ` Liang, Kan
0 siblings, 1 reply; 11+ messages in thread
From: Peter Zijlstra @ 2025-03-12 19:18 UTC (permalink / raw)
To: kan.liang
Cc: mingo, tglx, bp, acme, namhyung, irogers, linux-kernel, ak,
eranian
On Wed, Mar 12, 2025 at 11:25:21AM -0700, kan.liang@linux.intel.com wrote:
> +static int
> +attach_global_ctx_data(struct kmem_cache *ctx_cache)
> +{
> + if (refcount_inc_not_zero(&global_ctx_data_ref))
> + return 0;
> +
> + percpu_down_write(&global_ctx_data_rwsem);
> + if (!refcount_inc_not_zero(&global_ctx_data_ref)) {
> + struct task_struct *g, *p;
> + struct perf_ctx_data *cd;
> + int ret;
> +
> +again:
> + /* Allocate everything */
> + rcu_read_lock();
> + for_each_process_thread(g, p) {
> + cd = rcu_dereference(p->perf_ctx_data);
> + if (cd && !cd->global) {
> + cd->global = 1;
> + if (!refcount_inc_not_zero(&cd->refcount))
> + cd = NULL;
> + }
> + if (!cd) {
> + get_task_struct(p);
> + rcu_read_unlock();
> +
> + ret = attach_task_ctx_data(p, ctx_cache, true);
> + put_task_struct(p);
> + if (ret) {
> + __detach_global_ctx_data();
> + return ret;
AFAICT this returns with global_ctx_data_rwsem taken, no?
> + }
> + goto again;
> + }
> + }
> + rcu_read_unlock();
> +
> + refcount_set(&global_ctx_data_ref, 1);
> + }
> + percpu_up_write(&global_ctx_data_rwsem);
> +
> + return 0;
> +}
Can we rework this with guards? A little something like so?
---
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5233,18 +5233,20 @@ static refcount_t global_ctx_data_ref;
static int
attach_global_ctx_data(struct kmem_cache *ctx_cache)
{
+ struct task_struct *g, *p;
+ struct perf_ctx_data *cd;
+ int ret;
+
if (refcount_inc_not_zero(&global_ctx_data_ref))
return 0;
- percpu_down_write(&global_ctx_data_rwsem);
- if (!refcount_inc_not_zero(&global_ctx_data_ref)) {
- struct task_struct *g, *p;
- struct perf_ctx_data *cd;
- int ret;
+ guard(percpu_write)(&global_ctx_data_rwsem);
+ if (refcount_inc_not_zero(&global_ctx_data_ref))
+ return 0;
again:
- /* Allocate everything */
- rcu_read_lock();
+ /* Allocate everything */
+ scoped_guard (rcu) {
for_each_process_thread(g, p) {
cd = rcu_dereference(p->perf_ctx_data);
if (cd && !cd->global) {
@@ -5254,24 +5256,23 @@ attach_global_ctx_data(struct kmem_cache
}
if (!cd) {
get_task_struct(p);
- rcu_read_unlock();
-
- ret = attach_task_ctx_data(p, ctx_cache, true);
- put_task_struct(p);
- if (ret) {
- __detach_global_ctx_data();
- return ret;
- }
- goto again;
+ goto alloc;
}
}
- rcu_read_unlock();
-
- refcount_set(&global_ctx_data_ref, 1);
}
- percpu_up_write(&global_ctx_data_rwsem);
+
+ refcount_set(&global_ctx_data_ref, 1);
return 0;
+
+alloc:
+ ret = attach_task_ctx_data(p, ctx_cache, true);
+ put_task_struct(p);
+ if (ret) {
+ __detach_global_ctx_data();
+ return ret;
+ }
+ goto again;
}
static int
@@ -5338,15 +5339,12 @@ static void detach_global_ctx_data(void)
if (refcount_dec_not_one(&global_ctx_data_ref))
return;
- percpu_down_write(&global_ctx_data_rwsem);
+ guard(percpu_write)(&global_ctx_data_rwsem);
if (!refcount_dec_and_test(&global_ctx_data_ref))
- goto unlock;
+ return;
/* remove everything */
__detach_global_ctx_data();
-
-unlock:
- percpu_up_write(&global_ctx_data_rwsem);
}
static void detach_perf_ctx_data(struct perf_event *event)
@@ -8776,9 +8774,9 @@ perf_event_alloc_task_data(struct task_s
if (!ctx_cache)
return;
- percpu_down_read(&global_ctx_data_rwsem);
+ guard(percpu_read)(&global_ctx_data_rwsem);
+ guard(rcu)();
- rcu_read_lock();
cd = rcu_dereference(child->perf_ctx_data);
if (!cd) {
@@ -8787,21 +8785,16 @@ perf_event_alloc_task_data(struct task_s
* when attaching the perf_ctx_data.
*/
if (!refcount_read(&global_ctx_data_ref))
- goto rcu_unlock;
+ return;
rcu_read_unlock();
attach_task_ctx_data(child, ctx_cache, true);
- goto up_rwsem;
+ return;
}
if (!cd->global) {
cd->global = 1;
refcount_inc(&cd->refcount);
}
-
-rcu_unlock:
- rcu_read_unlock();
-up_rwsem:
- percpu_up_read(&global_ctx_data_rwsem);
}
void perf_event_fork(struct task_struct *task)
@@ -13845,9 +13838,8 @@ void perf_event_exit_task(struct task_st
/*
* Detach the perf_ctx_data for the system-wide event.
*/
- percpu_down_read(&global_ctx_data_rwsem);
+ guard(percpu_read)(&global_ctx_data_rwsem);
detach_task_ctx_data(child);
- percpu_up_read(&global_ctx_data_rwsem);
}
static void perf_free_event(struct perf_event *event,
diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index c012df33a9f0..36f3082f2d82 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -8,6 +8,7 @@
#include <linux/wait.h>
#include <linux/rcu_sync.h>
#include <linux/lockdep.h>
+#include <linux/cleanup.h>
struct percpu_rw_semaphore {
struct rcu_sync rss;
@@ -125,6 +126,13 @@ extern bool percpu_is_read_locked(struct percpu_rw_semaphore *);
extern void percpu_down_write(struct percpu_rw_semaphore *);
extern void percpu_up_write(struct percpu_rw_semaphore *);
+DEFINE_GUARD(percpu_read, struct percpu_rw_semaphore *,
+ percpu_down_read(_T), percpu_up_read(_T))
+DEFINE_GUARD_COND(percpu_read, _try, percpu_down_read_trylock(_T))
+
+DEFINE_GUARD(percpu_write, struct percpu_rw_semaphore *,
+ percpu_down_write(_T), percpu_up_write(_T))
+
static inline bool percpu_is_write_locked(struct percpu_rw_semaphore *sem)
{
return atomic_read(&sem->block);
* Re: [PATCH V8 1/6] perf: Save PMU specific data in task_struct
2025-03-12 19:05 ` [PATCH V8 1/6] perf: Save PMU specific data in task_struct Peter Zijlstra
@ 2025-03-12 19:41 ` Liang, Kan
2025-03-12 19:43 ` Peter Zijlstra
0 siblings, 1 reply; 11+ messages in thread
From: Liang, Kan @ 2025-03-12 19:41 UTC (permalink / raw)
To: Peter Zijlstra
Cc: mingo, tglx, bp, acme, namhyung, irogers, linux-kernel, ak,
eranian
On 2025-03-12 3:05 p.m., Peter Zijlstra wrote:
>
> I'm sorry, but since I spotted a bug in the second patch, I'm going to
> reply and suggest some overall changes.
Sure. Thanks.
>
> On Wed, Mar 12, 2025 at 11:25:20AM -0700, kan.liang@linux.intel.com wrote:
>
>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>> index 3e270822b915..b8442047a2b6 100644
>> --- a/include/linux/perf_event.h
>> +++ b/include/linux/perf_event.h
>> @@ -1021,6 +1021,36 @@ struct perf_event_context {
>> local_t nr_no_switch_fast;
>> };
>>
>> +/**
>> + * struct perf_ctx_data - PMU specific data for a task
>> + * @rcu_head: To avoid the race on free PMU specific data
>> + * @refcount: To track users
>> + * @global: To track system-wide users
>> + * @ctx_cache: Kmem cache of PMU specific data
>> + * @data: PMU specific data
>> + *
>> + * Currently, the struct is only used in Intel LBR call stack mode to
>> + * save/restore the call stack of a task on context switches.
>> + * The data only be allocated when Intel LBR call stack mode is enabled.
>> + * The data will be freed when the mode is disabled. The rcu_head is
>> + * used to prevent the race on free the data.
>> + * The content of the data will only be accessed in context switch, which
>> + * should be protected by rcu_read_lock().
>> + *
>> + * Careful: Struct perf_ctx_data is added as a pointor in struct task_struct.
>
> pointer
>
>> + * When system-wide Intel LBR call stack mode is enabled, a buffer with
>> + * constant size will be allocated for each task.
>> + * Also, system memory consumption can further grow when the size of
>> + * struct perf_ctx_data enlarges.
>> + */
>> +struct perf_ctx_data {
>> + struct rcu_head rcu_head;
>> + refcount_t refcount;
>> + int global;
>> + struct kmem_cache *ctx_cache;
>> + void *data;
>> +};
>
> I can't remember why this is complicated like this. Why do we have a
> kmemcache and yet another data pointer in there?
The kmem_cache is introduced to address the alignment requirement of
Arch LBR.
https://lore.kernel.org/lkml/159420190705.4006.11190540790919295173.tip-bot2@tip-bot2/
When users do system-wide profiling, perf has to allocate a buffer when
a thread is forked and free the buffer when the thread exits. The
pmu->task_ctx_cache is required for that, but perf would have to search
the perf_event_list every time to find the proper PMU.
So the *ctx_cache pointer is introduced to avoid the search.
Thanks,
Kan
>
> Specifically, why can't we do something like:
>
> struct perf_ctx_data {
> struct rcu_head rcu;
> refcount_t refcount;
> int global;
> char data[];
> };
>
> and simply allocate the whole thing as a single allocation?
>
> So then the allocation is something like:
>
> cd = kzalloc(sizeof(*cd) + event->pmu->task_ctx_size, GFP_KERNEL);
>
>
* Re: [PATCH V8 1/6] perf: Save PMU specific data in task_struct
2025-03-12 19:41 ` Liang, Kan
@ 2025-03-12 19:43 ` Peter Zijlstra
0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2025-03-12 19:43 UTC (permalink / raw)
To: Liang, Kan
Cc: mingo, tglx, bp, acme, namhyung, irogers, linux-kernel, ak,
eranian
On Wed, Mar 12, 2025 at 03:41:06PM -0400, Liang, Kan wrote:
> The kmem_cache is introduced to address the alignment requirement for
> Arch LBR.
> https://lore.kernel.org/lkml/159420190705.4006.11190540790919295173.tip-bot2@tip-bot2/
Urgh, okay. Please stick that in a comment somewhere.
* Re: [PATCH V8 2/6] perf: attach/detach PMU specific data
2025-03-12 19:18 ` Peter Zijlstra
@ 2025-03-12 19:52 ` Liang, Kan
0 siblings, 0 replies; 11+ messages in thread
From: Liang, Kan @ 2025-03-12 19:52 UTC (permalink / raw)
To: Peter Zijlstra
Cc: mingo, tglx, bp, acme, namhyung, irogers, linux-kernel, ak,
eranian
On 2025-03-12 3:18 p.m., Peter Zijlstra wrote:
> On Wed, Mar 12, 2025 at 11:25:21AM -0700, kan.liang@linux.intel.com wrote:
>
>> +static int
>> +attach_global_ctx_data(struct kmem_cache *ctx_cache)
>> +{
>> + if (refcount_inc_not_zero(&global_ctx_data_ref))
>> + return 0;
>> +
>> + percpu_down_write(&global_ctx_data_rwsem);
>> + if (!refcount_inc_not_zero(&global_ctx_data_ref)) {
>> + struct task_struct *g, *p;
>> + struct perf_ctx_data *cd;
>> + int ret;
>> +
>> +again:
>> + /* Allocate everything */
>> + rcu_read_lock();
>> + for_each_process_thread(g, p) {
>> + cd = rcu_dereference(p->perf_ctx_data);
>> + if (cd && !cd->global) {
>> + cd->global = 1;
>> + if (!refcount_inc_not_zero(&cd->refcount))
>> + cd = NULL;
>> + }
>> + if (!cd) {
>> + get_task_struct(p);
>> + rcu_read_unlock();
>> +
>> + ret = attach_task_ctx_data(p, ctx_cache, true);
>> + put_task_struct(p);
>> + if (ret) {
>> + __detach_global_ctx_data();
>> + return ret;
>
> AFAICT this returns with global_ctx_data_rwsem taken, no?
Ah, yes
>
>> + }
>> + goto again;
>> + }
>> + }
>> + rcu_read_unlock();
>> +
>> + refcount_set(&global_ctx_data_ref, 1);
>> + }
>> + percpu_up_write(&global_ctx_data_rwsem);
>> +
>> + return 0;
>> +}
>
> Can we rework this with guards? A little something like so?
>
Yes. I will do more test and send a V9.
Thanks,
Kan
> ---
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5233,18 +5233,20 @@ static refcount_t global_ctx_data_ref;
> static int
> attach_global_ctx_data(struct kmem_cache *ctx_cache)
> {
> + struct task_struct *g, *p;
> + struct perf_ctx_data *cd;
> + int ret;
> +
> if (refcount_inc_not_zero(&global_ctx_data_ref))
> return 0;
>
> - percpu_down_write(&global_ctx_data_rwsem);
> - if (!refcount_inc_not_zero(&global_ctx_data_ref)) {
> - struct task_struct *g, *p;
> - struct perf_ctx_data *cd;
> - int ret;
> + guard(percpu_write)(&global_ctx_data_rwsem);
> + if (refcount_inc_not_zero(&global_ctx_data_ref))
> + return 0;
>
> again:
> - /* Allocate everything */
> - rcu_read_lock();
> + /* Allocate everything */
> + scoped_guard (rcu) {
> for_each_process_thread(g, p) {
> cd = rcu_dereference(p->perf_ctx_data);
> if (cd && !cd->global) {
> @@ -5254,24 +5256,23 @@ attach_global_ctx_data(struct kmem_cache
> }
> if (!cd) {
> get_task_struct(p);
> - rcu_read_unlock();
> -
> - ret = attach_task_ctx_data(p, ctx_cache, true);
> - put_task_struct(p);
> - if (ret) {
> - __detach_global_ctx_data();
> - return ret;
> - }
> - goto again;
> + goto alloc;
> }
> }
> - rcu_read_unlock();
> -
> - refcount_set(&global_ctx_data_ref, 1);
> }
> - percpu_up_write(&global_ctx_data_rwsem);
> +
> + refcount_set(&global_ctx_data_ref, 1);
>
> return 0;
> +
> +alloc:
> + ret = attach_task_ctx_data(p, ctx_cache, true);
> + put_task_struct(p);
> + if (ret) {
> + __detach_global_ctx_data();
> + return ret;
> + }
> + goto again;
> }
>
> static int
> @@ -5338,15 +5339,12 @@ static void detach_global_ctx_data(void)
> if (refcount_dec_not_one(&global_ctx_data_ref))
> return;
>
> - percpu_down_write(&global_ctx_data_rwsem);
> + guard(percpu_write)(&global_ctx_data_rwsem);
> if (!refcount_dec_and_test(&global_ctx_data_ref))
> - goto unlock;
> + return;
>
> /* remove everything */
> __detach_global_ctx_data();
> -
> -unlock:
> - percpu_up_write(&global_ctx_data_rwsem);
> }
>
> static void detach_perf_ctx_data(struct perf_event *event)
> @@ -8776,9 +8774,9 @@ perf_event_alloc_task_data(struct task_s
> if (!ctx_cache)
> return;
>
> - percpu_down_read(&global_ctx_data_rwsem);
> + guard(percpu_read)(&global_ctx_data_rwsem);
> + guard(rcu)();
>
> - rcu_read_lock();
> cd = rcu_dereference(child->perf_ctx_data);
>
> if (!cd) {
> @@ -8787,21 +8785,16 @@ perf_event_alloc_task_data(struct task_s
> * when attaching the perf_ctx_data.
> */
> if (!refcount_read(&global_ctx_data_ref))
> - goto rcu_unlock;
> + return;
> rcu_read_unlock();
> attach_task_ctx_data(child, ctx_cache, true);
> - goto up_rwsem;
> + return;
> }
>
> if (!cd->global) {
> cd->global = 1;
> refcount_inc(&cd->refcount);
> }
> -
> -rcu_unlock:
> - rcu_read_unlock();
> -up_rwsem:
> - percpu_up_read(&global_ctx_data_rwsem);
> }
>
> void perf_event_fork(struct task_struct *task)
> @@ -13845,9 +13838,8 @@ void perf_event_exit_task(struct task_st
> /*
> * Detach the perf_ctx_data for the system-wide event.
> */
> - percpu_down_read(&global_ctx_data_rwsem);
> + guard(percpu_read)(&global_ctx_data_rwsem);
> detach_task_ctx_data(child);
> - percpu_up_read(&global_ctx_data_rwsem);
> }
>
> static void perf_free_event(struct perf_event *event,
> diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
> index c012df33a9f0..36f3082f2d82 100644
> --- a/include/linux/percpu-rwsem.h
> +++ b/include/linux/percpu-rwsem.h
> @@ -8,6 +8,7 @@
> #include <linux/wait.h>
> #include <linux/rcu_sync.h>
> #include <linux/lockdep.h>
> +#include <linux/cleanup.h>
>
> struct percpu_rw_semaphore {
> struct rcu_sync rss;
> @@ -125,6 +126,13 @@ extern bool percpu_is_read_locked(struct percpu_rw_semaphore *);
> extern void percpu_down_write(struct percpu_rw_semaphore *);
> extern void percpu_up_write(struct percpu_rw_semaphore *);
>
> +DEFINE_GUARD(percpu_read, struct percpu_rw_semaphore *,
> + percpu_down_read(_T), percpu_up_read(_T))
> +DEFINE_GUARD_COND(percpu_read, _try, percpu_down_read_trylock(_T))
> +
> +DEFINE_GUARD(percpu_write, struct percpu_rw_semaphore *,
> + percpu_down_write(_T), percpu_up_write(_T))
> +
> static inline bool percpu_is_write_locked(struct percpu_rw_semaphore *sem)
> {
> return atomic_read(&sem->block);
>