* [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter()
[not found] <20221212125844.41157-1-likexu@tencent.com>
@ 2022-12-12 12:58 ` Like Xu
2022-12-12 13:23 ` Marc Zyngier
2022-12-14 3:52 ` Ravi Bangoria
2022-12-12 12:58 ` [PATCH RFC 2/8] perf: x86/core: Expose the available number of the Topdown metrics Like Xu
2022-12-12 12:58 ` [PATCH RFC 3/8] perf: x86/core: Sync PERF_METRICS bit together with fixed counter3 Like Xu
2 siblings, 2 replies; 7+ messages in thread
From: Like Xu @ 2022-12-12 12:58 UTC (permalink / raw)
To: Peter Zijlstra, Sean Christopherson
Cc: Paolo Bonzini, linux-kernel, kvm, Marc Zyngier, Fenghua Yu,
kvmarm, linux-perf-users
From: Like Xu <likexu@tencent.com>
Like syscall users, kernel-space perf_event creators may also want to use
the group counter abstraction to gain PMU functionality; an in-kernel
counter group behaves much like a normal 'single' counter, following the
usual group semantics.
No functional change intended. One example: KVM creates an Intel slots
event as the group leader and topdown metric events as group members to
emulate the MSR_PERF_METRICS PMU capability for guests.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: kvmarm@lists.linux.dev
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Like Xu <likexu@tencent.com>
---
arch/arm64/kvm/pmu-emul.c | 4 ++--
arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 4 ++--
arch/x86/kvm/pmu.c | 2 +-
arch/x86/kvm/vmx/pmu_intel.c | 2 +-
include/linux/perf_event.h | 1 +
kernel/events/core.c | 4 +++-
kernel/events/hw_breakpoint.c | 4 ++--
kernel/events/hw_breakpoint_test.c | 2 +-
kernel/watchdog_hld.c | 2 +-
9 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..11c3386bc86b 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -624,7 +624,7 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
attr.sample_period = compute_period(pmc, kvm_pmu_get_pmc_value(pmc));
- event = perf_event_create_kernel_counter(&attr, -1, current,
+ event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
kvm_pmu_perf_overflow, pmc);
if (IS_ERR(event)) {
@@ -713,7 +713,7 @@ static struct arm_pmu *kvm_pmu_probe_armpmu(void)
attr.config = ARMV8_PMUV3_PERFCTR_CPU_CYCLES;
attr.sample_period = GENMASK(63, 0);
- event = perf_event_create_kernel_counter(&attr, -1, current,
+ event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
kvm_pmu_perf_overflow, &attr);
if (IS_ERR(event)) {
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index d961ae3ed96e..43e54bb200cd 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -952,12 +952,12 @@ static int measure_residency_fn(struct perf_event_attr *miss_attr,
u64 tmp;
miss_event = perf_event_create_kernel_counter(miss_attr, plr->cpu,
- NULL, NULL, NULL);
+ NULL, NULL, NULL, NULL);
if (IS_ERR(miss_event))
goto out;
hit_event = perf_event_create_kernel_counter(hit_attr, plr->cpu,
- NULL, NULL, NULL);
+ NULL, NULL, NULL, NULL);
if (IS_ERR(hit_event))
goto out_miss;
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index eb594620dd75..f6c8180241d7 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -204,7 +204,7 @@ static int pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, u64 config,
attr.precise_ip = 3;
}
- event = perf_event_create_kernel_counter(&attr, -1, current,
+ event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
kvm_perf_overflow, pmc);
if (IS_ERR(event)) {
pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n",
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index f951dc756456..b746381307c7 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -299,7 +299,7 @@ int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu)
}
event = perf_event_create_kernel_counter(&attr, -1,
- current, NULL, NULL);
+ current, NULL, NULL, NULL);
if (IS_ERR(event)) {
pr_debug_ratelimited("%s: failed %ld\n",
__func__, PTR_ERR(event));
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 0031f7b4d9ab..5f34e1d0bff8 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1023,6 +1023,7 @@ extern struct perf_event *
perf_event_create_kernel_counter(struct perf_event_attr *attr,
int cpu,
struct task_struct *task,
+ struct perf_event *group_leader,
perf_overflow_handler_t callback,
void *context);
extern void perf_pmu_migrate_context(struct pmu *pmu,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 7f04f995c975..f671b1a9a691 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12674,12 +12674,14 @@ SYSCALL_DEFINE5(perf_event_open,
* @attr: attributes of the counter to create
* @cpu: cpu in which the counter is bound
* @task: task to profile (NULL for percpu)
+ * @group_leader: event group leader
* @overflow_handler: callback to trigger when we hit the event
* @context: context data could be used in overflow_handler callback
*/
struct perf_event *
perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
struct task_struct *task,
+ struct perf_event *group_leader,
perf_overflow_handler_t overflow_handler,
void *context)
{
@@ -12694,7 +12696,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
if (attr->aux_output)
return ERR_PTR(-EINVAL);
- event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+ event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
overflow_handler, context, -1);
if (IS_ERR(event)) {
err = PTR_ERR(event);
diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
index c3797701339c..65b5b1421e62 100644
--- a/kernel/events/hw_breakpoint.c
+++ b/kernel/events/hw_breakpoint.c
@@ -771,7 +771,7 @@ register_user_hw_breakpoint(struct perf_event_attr *attr,
void *context,
struct task_struct *tsk)
{
- return perf_event_create_kernel_counter(attr, -1, tsk, triggered,
+ return perf_event_create_kernel_counter(attr, -1, tsk, NULL, triggered,
context);
}
EXPORT_SYMBOL_GPL(register_user_hw_breakpoint);
@@ -881,7 +881,7 @@ register_wide_hw_breakpoint(struct perf_event_attr *attr,
cpus_read_lock();
for_each_online_cpu(cpu) {
- bp = perf_event_create_kernel_counter(attr, cpu, NULL,
+ bp = perf_event_create_kernel_counter(attr, cpu, NULL, NULL,
triggered, context);
if (IS_ERR(bp)) {
err = PTR_ERR(bp);
diff --git a/kernel/events/hw_breakpoint_test.c b/kernel/events/hw_breakpoint_test.c
index c57610f52bb4..b3597df12284 100644
--- a/kernel/events/hw_breakpoint_test.c
+++ b/kernel/events/hw_breakpoint_test.c
@@ -39,7 +39,7 @@ static struct perf_event *register_test_bp(int cpu, struct task_struct *tsk, int
attr.bp_addr = (unsigned long)&break_vars[idx];
attr.bp_len = HW_BREAKPOINT_LEN_1;
attr.bp_type = HW_BREAKPOINT_RW;
- return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL);
+ return perf_event_create_kernel_counter(&attr, cpu, tsk, NULL, NULL, NULL);
}
static void unregister_test_bp(struct perf_event **bp)
diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 247bf0b1582c..bb755dadba54 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -173,7 +173,7 @@ static int hardlockup_detector_event_create(void)
wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
/* Try to register using hardware perf events */
- evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL,
+ evt = perf_event_create_kernel_counter(wd_attr, cpu, NULL, NULL,
watchdog_overflow_callback, NULL);
if (IS_ERR(evt)) {
pr_debug("Perf event create on CPU %d failed with %ld\n", cpu,
--
2.38.2
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [PATCH RFC 2/8] perf: x86/core: Expose the available number of the Topdown metrics
[not found] <20221212125844.41157-1-likexu@tencent.com>
2022-12-12 12:58 ` [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter() Like Xu
@ 2022-12-12 12:58 ` Like Xu
2022-12-12 12:58 ` [PATCH RFC 3/8] perf: x86/core: Sync PERF_METRICS bit together with fixed counter3 Like Xu
2 siblings, 0 replies; 7+ messages in thread
From: Like Xu @ 2022-12-12 12:58 UTC (permalink / raw)
To: Peter Zijlstra, Sean Christopherson
Cc: Paolo Bonzini, linux-kernel, kvm, linux-perf-users
From: Like Xu <likexu@tencent.com>
Intel Sapphire Rapids servers have 8 Topdown metrics events, while Ice
Lake supports only 4. The available number of Topdown metrics events is
model specific, with no architectural hint.
Without help from the perf core, KVM could only rely on the CPU model to
emulate the correct number of metrics events for the platform. It would
be nicer to have the perf core tell KVM the available number of Topdown
metrics, just like x86_pmu.num_counters.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Like Xu <likexu@tencent.com>
---
arch/x86/events/core.c | 1 +
arch/x86/include/asm/perf_event.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index b30b8bbcd1e2..d0d84c7a6876 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -3006,6 +3006,7 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
* which available for all cores.
*/
cap->num_counters_gp = x86_pmu.num_counters;
+ cap->num_topdown_events = x86_pmu.num_topdown_events;
cap->num_counters_fixed = x86_pmu.num_counters_fixed;
cap->bit_width_gp = x86_pmu.cntval_bits;
cap->bit_width_fixed = x86_pmu.cntval_bits;
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 5d0f6891ae61..3e263d291595 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -219,6 +219,7 @@ struct x86_pmu_capability {
int version;
int num_counters_gp;
int num_counters_fixed;
+ int num_topdown_events;
int bit_width_gp;
int bit_width_fixed;
unsigned int events_mask;
--
2.38.2
* [PATCH RFC 3/8] perf: x86/core: Sync PERF_METRICS bit together with fixed counter3
[not found] <20221212125844.41157-1-likexu@tencent.com>
2022-12-12 12:58 ` [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter() Like Xu
2022-12-12 12:58 ` [PATCH RFC 2/8] perf: x86/core: Expose the available number of the Topdown metrics Like Xu
@ 2022-12-12 12:58 ` Like Xu
2 siblings, 0 replies; 7+ messages in thread
From: Like Xu @ 2022-12-12 12:58 UTC (permalink / raw)
To: Peter Zijlstra, Sean Christopherson
Cc: Paolo Bonzini, linux-kernel, kvm, linux-perf-users
From: Like Xu <likexu@tencent.com>
When the guest uses Topdown (fixed counter 3 and the PERF_METRICS MSR),
the sharing rule for the PERF_METRICS enable bit in the GLOBAL_CTRL MSR
does not change; that is, it should be updated synchronously with fixed
counter 3. Since guest Topdown support has only just been enabled, this
is not strictly a bug fix.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Like Xu <likexu@tencent.com>
---
arch/x86/events/intel/core.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 1b92bf05fd65..e7897fd9f7ab 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2436,6 +2436,8 @@ static void intel_pmu_disable_fixed(struct perf_event *event)
*/
if (*(u64 *)cpuc->active_mask & INTEL_PMC_OTHER_TOPDOWN_BITS(idx))
return;
+
+ intel_clear_masks(event, GLOBAL_CTRL_EN_PERF_METRICS);
idx = INTEL_PMC_IDX_FIXED_SLOTS;
}
@@ -2729,6 +2731,7 @@ static void intel_pmu_enable_fixed(struct perf_event *event)
if (*(u64 *)cpuc->active_mask & INTEL_PMC_OTHER_TOPDOWN_BITS(idx))
return;
+ intel_set_masks(event, GLOBAL_CTRL_EN_PERF_METRICS);
idx = INTEL_PMC_IDX_FIXED_SLOTS;
}
--
2.38.2
* Re: [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter()
2022-12-12 12:58 ` [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter() Like Xu
@ 2022-12-12 13:23 ` Marc Zyngier
2022-12-15 13:11 ` Like Xu
2022-12-14 3:52 ` Ravi Bangoria
1 sibling, 1 reply; 7+ messages in thread
From: Marc Zyngier @ 2022-12-12 13:23 UTC (permalink / raw)
To: Like Xu
Cc: Peter Zijlstra, Sean Christopherson, Paolo Bonzini, linux-kernel,
kvm, Fenghua Yu, kvmarm, linux-perf-users
On Mon, 12 Dec 2022 12:58:37 +0000,
Like Xu <like.xu.linux@gmail.com> wrote:
>
> From: Like Xu <likexu@tencent.com>
>
> Like syscall users, kernel-space perf_event creators may also want to use
> the group counter abstraction to gain PMU functionality; an in-kernel
> counter group behaves much like a normal 'single' counter, following the
> usual group semantics.
>
> No functional change intended. One example: KVM creates an Intel slots
> event as the group leader and topdown metric events as group members to
> emulate the MSR_PERF_METRICS PMU capability for guests.
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Fenghua Yu <fenghua.yu@intel.com>
> Cc: kvmarm@lists.linux.dev
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Like Xu <likexu@tencent.com>
> ---
> arch/arm64/kvm/pmu-emul.c | 4 ++--
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 4 ++--
> arch/x86/kvm/pmu.c | 2 +-
> arch/x86/kvm/vmx/pmu_intel.c | 2 +-
> include/linux/perf_event.h | 1 +
> kernel/events/core.c | 4 +++-
> kernel/events/hw_breakpoint.c | 4 ++--
> kernel/events/hw_breakpoint_test.c | 2 +-
> kernel/watchdog_hld.c | 2 +-
> 9 files changed, 14 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 24908400e190..11c3386bc86b 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -624,7 +624,7 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
>
> attr.sample_period = compute_period(pmc, kvm_pmu_get_pmc_value(pmc));
>
> - event = perf_event_create_kernel_counter(&attr, -1, current,
> + event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
> kvm_pmu_perf_overflow, pmc);
Wouldn't it be better to have a separate helper that takes the group leader
as a parameter, and reimplement perf_event_create_kernel_counter() in terms
of this helper?
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter()
2022-12-12 12:58 ` [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter() Like Xu
2022-12-12 13:23 ` Marc Zyngier
@ 2022-12-14 3:52 ` Ravi Bangoria
2022-12-15 13:36 ` Like Xu
1 sibling, 1 reply; 7+ messages in thread
From: Ravi Bangoria @ 2022-12-14 3:52 UTC (permalink / raw)
To: Like Xu
Cc: Peter Zijlstra, Sean Christopherson, Paolo Bonzini, linux-kernel,
kvm, Marc Zyngier, Fenghua Yu, kvmarm, linux-perf-users,
Ravi Bangoria
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 7f04f995c975..f671b1a9a691 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -12674,12 +12674,14 @@ SYSCALL_DEFINE5(perf_event_open,
> * @attr: attributes of the counter to create
> * @cpu: cpu in which the counter is bound
> * @task: task to profile (NULL for percpu)
> + * @group_leader: event group leader
> * @overflow_handler: callback to trigger when we hit the event
> * @context: context data could be used in overflow_handler callback
> */
> struct perf_event *
> perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
> struct task_struct *task,
> + struct perf_event *group_leader,
> perf_overflow_handler_t overflow_handler,
> void *context)
> {
> @@ -12694,7 +12696,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
> if (attr->aux_output)
> return ERR_PTR(-EINVAL);
>
> - event = perf_event_alloc(attr, cpu, task, NULL, NULL,
> + event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
> overflow_handler, context, -1);
Grouping involves a lot of complexity. Setting group_leader won't be
sufficient. Please see the perf_event_open() syscall code for more detail.
Thanks,
Ravi
* Re: [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter()
2022-12-12 13:23 ` Marc Zyngier
@ 2022-12-15 13:11 ` Like Xu
0 siblings, 0 replies; 7+ messages in thread
From: Like Xu @ 2022-12-15 13:11 UTC (permalink / raw)
To: Marc Zyngier
Cc: Peter Zijlstra, Sean Christopherson, Paolo Bonzini, linux-kernel,
kvm, Fenghua Yu, kvmarm, linux-perf-users
On 12/12/2022 9:23 pm, Marc Zyngier wrote:
> On Mon, 12 Dec 2022 12:58:37 +0000,
> Like Xu <like.xu.linux@gmail.com> wrote:
>>
>> From: Like Xu <likexu@tencent.com>
>>
>> Like syscall users, kernel-space perf_event creators may also want to use
>> the group counter abstraction to gain PMU functionality; an in-kernel
>> counter group behaves much like a normal 'single' counter, following the
>> usual group semantics.
>>
>> No functional change intended. One example: KVM creates an Intel slots
>> event as the group leader and topdown metric events as group members to
>> emulate the MSR_PERF_METRICS PMU capability for guests.
>>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Marc Zyngier <maz@kernel.org>
>> Cc: Fenghua Yu <fenghua.yu@intel.com>
>> Cc: kvmarm@lists.linux.dev
>> Cc: linux-perf-users@vger.kernel.org
>> Signed-off-by: Like Xu <likexu@tencent.com>
>> ---
>> arch/arm64/kvm/pmu-emul.c | 4 ++--
>> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 4 ++--
>> arch/x86/kvm/pmu.c | 2 +-
>> arch/x86/kvm/vmx/pmu_intel.c | 2 +-
>> include/linux/perf_event.h | 1 +
>> kernel/events/core.c | 4 +++-
>> kernel/events/hw_breakpoint.c | 4 ++--
>> kernel/events/hw_breakpoint_test.c | 2 +-
>> kernel/watchdog_hld.c | 2 +-
>> 9 files changed, 14 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
>> index 24908400e190..11c3386bc86b 100644
>> --- a/arch/arm64/kvm/pmu-emul.c
>> +++ b/arch/arm64/kvm/pmu-emul.c
>> @@ -624,7 +624,7 @@ static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc)
>>
>> attr.sample_period = compute_period(pmc, kvm_pmu_get_pmc_value(pmc));
>>
>> - event = perf_event_create_kernel_counter(&attr, -1, current,
>> + event = perf_event_create_kernel_counter(&attr, -1, current, NULL,
>> kvm_pmu_perf_overflow, pmc);
>
> Wouldn't it be better to have a separate helper that takes the group leader
> as a parameter, and reimplement perf_event_create_kernel_counter() in term
> of this helper?
>
> M.
>
Applied. It makes the changes more concise, thank you.
* Re: [PATCH RFC 1/8] perf/core: Add *group_leader to perf_event_create_kernel_counter()
2022-12-14 3:52 ` Ravi Bangoria
@ 2022-12-15 13:36 ` Like Xu
0 siblings, 0 replies; 7+ messages in thread
From: Like Xu @ 2022-12-15 13:36 UTC (permalink / raw)
To: Ravi Bangoria, Peter Zijlstra
Cc: Sean Christopherson, Paolo Bonzini, linux-kernel, kvm,
Marc Zyngier, Fenghua Yu, kvmarm, linux-perf-users
On 14/12/2022 11:52 am, Ravi Bangoria wrote:
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 7f04f995c975..f671b1a9a691 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -12674,12 +12674,14 @@ SYSCALL_DEFINE5(perf_event_open,
>> * @attr: attributes of the counter to create
>> * @cpu: cpu in which the counter is bound
>> * @task: task to profile (NULL for percpu)
>> + * @group_leader: event group leader
>> * @overflow_handler: callback to trigger when we hit the event
>> * @context: context data could be used in overflow_handler callback
>> */
>> struct perf_event *
>> perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
>> struct task_struct *task,
>> + struct perf_event *group_leader,
>> perf_overflow_handler_t overflow_handler,
>> void *context)
>> {
>> @@ -12694,7 +12696,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
>> if (attr->aux_output)
>> return ERR_PTR(-EINVAL);
>>
>> - event = perf_event_alloc(attr, cpu, task, NULL, NULL,
>> + event = perf_event_alloc(attr, cpu, task, group_leader, NULL,
>> overflow_handler, context, -1);
>
> Grouping involves a lot of complexity. Setting group_leader won't be
> sufficient. Please see the perf_event_open() syscall code for more detail.
>
> Thanks,
> Ravi
This is the main reason the RFC tag was added; more detailed professional
review is encouraged.
As far as I can tell, there are indeed a number of code gaps in supporting
grouped events in the kernel, but there are also opportunities, as other
new use cases may bring innovation.
I need to confirm this idea with the maintainers first; the alternative is
to create yet another special perf_event, similar to PMC_IDX_FIXED_VLBR,
which schedules the PERF_METRICS MSR for KVM.
PeterZ, any input?