From: Dapeng Mi <dapeng1.mi@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Kan Liang <kan.liang@linux.intel.com>,
Like Xu <likexu@tencent.com>, Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>
Cc: kvm@vger.kernel.org, linux-perf-users@vger.kernel.org,
linux-kernel@vger.kernel.org,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Zhang Xiong <xiong.y.zhang@intel.com>,
Lv Zhiyuan <zhiyuan.lv@intel.com>,
Yang Weijiang <weijiang.yang@intel.com>,
Dapeng Mi <dapeng1.mi@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>
Subject: [PATCH RFC v3 04/13] perf/core: Add function perf_event_move_group()
Date: Tue, 22 Aug 2023 13:11:31 +0800
Message-ID: <20230822051140.512879-5-dapeng1.mi@linux.intel.com>
In-Reply-To: <20230822051140.512879-1-dapeng1.mi@linux.intel.com>
Extract the group-moving code from sys_perf_event_open() into a new
function, perf_event_move_group().

A subsequent change adds perf_event_create_group_kernel_counters(),
which creates group events from kernel space and must perform the same
group move for the group leader event that sys_perf_event_open() does.
Factoring the moving code into a separate function avoids duplicating
it.
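
As a rough illustration (not part of this patch), the follow-up change
could call the new helper along these lines; the move_group condition,
the locking, and the surrounding names in
perf_event_create_group_kernel_counters() are assumptions modeled on
the sys_perf_event_open() path, not taken from this patch:

	/*
	 * Hypothetical call site in the upcoming
	 * perf_event_create_group_kernel_counters(); the move_group
	 * flag and ctx->mutex locking mirror the syscall path and are
	 * assumed here.
	 */
	mutex_lock(&ctx->mutex);
	if (move_group)
		perf_event_move_group(group_leader, pmu_ctx, ctx);
	perf_install_in_context(ctx, event, event->cpu);
	mutex_unlock(&ctx->mutex);

Installing the siblings before the leader, as the helper does, keeps
the partially-moved group unreachable through the group lists until the
leader itself is re-installed, which is why callers should rely on the
helper rather than open-coding the move.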
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
kernel/events/core.c | 82 ++++++++++++++++++++++++--------------------
1 file changed, 45 insertions(+), 37 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 616391158d7c..15eb82d1a010 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -12399,6 +12399,48 @@ static int perf_event_group_leader_check(struct perf_event *group_leader,
return 0;
}
+static void perf_event_move_group(struct perf_event *group_leader,
+ struct perf_event_pmu_context *pmu_ctx,
+ struct perf_event_context *ctx)
+{
+ struct perf_event *sibling;
+
+ perf_remove_from_context(group_leader, 0);
+ put_pmu_ctx(group_leader->pmu_ctx);
+
+ for_each_sibling_event(sibling, group_leader) {
+ perf_remove_from_context(sibling, 0);
+ put_pmu_ctx(sibling->pmu_ctx);
+ }
+
+ /*
+ * Install the group siblings before the group leader.
+ *
+ * Because a group leader will try and install the entire group
+ * (through the sibling list, which is still intact), we can
+ * end up with siblings installed in the wrong context.
+ *
+ * By installing siblings first we NO-OP because they're not
+ * reachable through the group lists.
+ */
+ for_each_sibling_event(sibling, group_leader) {
+ sibling->pmu_ctx = pmu_ctx;
+ get_pmu_ctx(pmu_ctx);
+ perf_event__state_init(sibling);
+ perf_install_in_context(ctx, sibling, sibling->cpu);
+ }
+
+ /*
+ * Removing from the context ends up with a disabled
+ * event. What we want here is an event in the initial
+ * startup state, ready to be added into the new context.
+ */
+ group_leader->pmu_ctx = pmu_ctx;
+ get_pmu_ctx(pmu_ctx);
+ perf_event__state_init(group_leader);
+ perf_install_in_context(ctx, group_leader, group_leader->cpu);
+}
+
/**
* sys_perf_event_open - open a performance event, associate it to a task/cpu
*
@@ -12414,7 +12456,7 @@ SYSCALL_DEFINE5(perf_event_open,
{
struct perf_event *group_leader = NULL, *output_event = NULL;
struct perf_event_pmu_context *pmu_ctx;
- struct perf_event *event, *sibling;
+ struct perf_event *event;
struct perf_event_attr attr;
struct perf_event_context *ctx;
struct file *event_file = NULL;
@@ -12646,42 +12688,8 @@ SYSCALL_DEFINE5(perf_event_open,
* where we start modifying current state.
*/
- if (move_group) {
- perf_remove_from_context(group_leader, 0);
- put_pmu_ctx(group_leader->pmu_ctx);
-
- for_each_sibling_event(sibling, group_leader) {
- perf_remove_from_context(sibling, 0);
- put_pmu_ctx(sibling->pmu_ctx);
- }
-
- /*
- * Install the group siblings before the group leader.
- *
- * Because a group leader will try and install the entire group
- * (through the sibling list, which is still in-tact), we can
- * end up with siblings installed in the wrong context.
- *
- * By installing siblings first we NO-OP because they're not
- * reachable through the group lists.
- */
- for_each_sibling_event(sibling, group_leader) {
- sibling->pmu_ctx = pmu_ctx;
- get_pmu_ctx(pmu_ctx);
- perf_event__state_init(sibling);
- perf_install_in_context(ctx, sibling, sibling->cpu);
- }
-
- /*
- * Removing from the context ends up with disabled
- * event. What we want here is event in the initial
- * startup state, ready to be add into new context.
- */
- group_leader->pmu_ctx = pmu_ctx;
- get_pmu_ctx(pmu_ctx);
- perf_event__state_init(group_leader);
- perf_install_in_context(ctx, group_leader, group_leader->cpu);
- }
+ if (move_group)
+ perf_event_move_group(group_leader, pmu_ctx, ctx);
/*
* Precalculate sample_data sizes; do while holding ctx::mutex such
--
2.34.1