From: Chengming Zhou <zhouchengming@bytedance.com>
To: peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
jolsa@kernel.org, namhyung@kernel.org, eranian@google.com
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
duanxiongchun@bytedance.com, songmuchun@bytedance.com,
Chengming Zhou <zhouchengming@bytedance.com>
Subject: [PATCH v3 3/5] perf/core: Don't need event_filter_match in merge_sched_in()
Date: Fri, 25 Mar 2022 11:53:16 +0800
Message-ID: <20220325035318.42168-4-zhouchengming@bytedance.com>
In-Reply-To: <20220325035318.42168-1-zhouchengming@bytedance.com>
There is an obsolete comment in perf_cgroup_switch(), since we no
longer use event_filter_match() in event_sched_out().

It turns out merge_sched_in() doesn't need event_filter_match()
either: we now use the perf_event groups RB-tree to fetch exactly the
matching perf_events, so there is no need to go through
event_filter_match() to re-check the match.
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
kernel/events/core.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index dd985c77bc37..225d408deb1a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -856,7 +856,8 @@ static void perf_cgroup_switch(struct task_struct *task, int mode)
cpu_ctx_sched_out(cpuctx, EVENT_ALL);
/*
* must not be done before ctxswout due
- * to event_filter_match() in event_sched_out()
+ * to update_cgrp_time_from_cpuctx() in
+ * ctx_sched_out()
*/
cpuctx->cgrp = NULL;
}
@@ -3804,9 +3805,6 @@ static int merge_sched_in(struct perf_event *event, void *data)
if (event->state <= PERF_EVENT_STATE_OFF)
return 0;
- if (!event_filter_match(event))
- return 0;
-
if (group_can_go_on(event, cpuctx, *can_add_hw)) {
if (!group_sched_in(event, cpuctx, ctx))
list_add_tail(&event->active_list, get_event_list(event));
--
2.20.1
Thread overview: 8+ messages
2022-03-25 3:53 [PATCH v3 0/5] perf/core: Fixes and cleanup for cgroup events Chengming Zhou
2022-03-25 3:53 ` [PATCH v3 1/5] perf/core: Don't pass task around when ctx sched in Chengming Zhou
2022-03-25 3:53 ` [PATCH v3 2/5] perf/core: Use perf_cgroup_info->active to check if cgroup is active Chengming Zhou
2022-03-25 3:53 ` Chengming Zhou [this message]
2022-03-25 15:11 ` [PATCH v3 3/5] perf/core: Don't need event_filter_match in merge_sched_in() Liang, Kan
2022-03-25 15:45 ` [External] " Chengming Zhou
2022-03-25 3:53 ` [PATCH v3 4/5] perf/core: Fix perf_cgroup_switch() Chengming Zhou
2022-03-25 3:53 ` [PATCH v3 5/5] perf/core: Always set cpuctx cgrp when enable cgroup event Chengming Zhou