From: Peter Zijlstra <peterz@infradead.org>
To: Song Liu <songliubraving@fb.com>
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, tj@kernel.org,
jolsa@kernel.org
Subject: Re: [RFC 2/2] perf: Sharing PMU counters across compatible events
Date: Mon, 28 May 2018 13:33:19 +0200 [thread overview]
Message-ID: <20180528113319.GD3452@worktop.programming.kicks-ass.net> (raw)
In-Reply-To: <20180504231102.2850679-3-songliubraving@fb.com>
On Fri, May 04, 2018 at 04:11:02PM -0700, Song Liu wrote:
> +static void add_event_to_dup_event_list(struct perf_event *event,
> + struct perf_cpu_context *cpuctx)
> +{
> + int i;
> +
> + for (i = 0; i < cpuctx->dup_event_count; ++i)
> + if (memcmp(&event->attr,
> + &cpuctx->dup_event_list[i].first->attr,
> + sizeof(event->attr)) == 0) {
> + event->dup_id = i;
> + return;
> + }
> + i = cpuctx->dup_event_count++;
> + cpuctx->dup_event_list[i].first = event;
> + cpuctx->dup_event_list[i].master = NULL;
> + INIT_LIST_HEAD(&cpuctx->dup_event_list[i].active_dup);
> + event->dup_id = i;
> + INIT_LIST_HEAD(&event->dup_sibling_entry);
> +}
> +
> +static int add_group_to_dup_event_list(struct perf_event *event, void *data)
> +{
> + struct sched_in_data *sid = data;
> + struct perf_event *sibling;
> +
> + add_event_to_dup_event_list(event, sid->cpuctx);
> + for_each_sibling_event(sibling, event)
> + add_event_to_dup_event_list(sibling, sid->cpuctx);
> +
> + return 0;
> +}
> +
> +static void rebuild_event_dup_list(struct perf_cpu_context *cpuctx)
> +{
> + int dup_count = cpuctx->ctx.nr_events;
> + struct perf_event_context *ctx = cpuctx->task_ctx;
> + struct sched_in_data sid = {
> + .ctx = ctx,
> + .cpuctx = cpuctx,
> + .can_add_hw = 1,
> + };
> +
> + if (ctx)
> + dup_count += ctx->nr_events;
> +
> + kfree(cpuctx->dup_event_list);
> + cpuctx->dup_event_count = 0;
> +
> + cpuctx->dup_event_list =
> + kzalloc(sizeof(struct perf_event_dup) * dup_count, GFP_ATOMIC);
> + if (!cpuctx->dup_event_list)
> + return;
> +
> + visit_groups_merge(&cpuctx->ctx.pinned_groups, smp_processor_id(),
> + add_group_to_dup_event_list, &sid);
> + visit_groups_merge(&cpuctx->ctx.flexible_groups, smp_processor_id(),
> + add_group_to_dup_event_list, &sid);
> + if (ctx) {
> + visit_groups_merge(&ctx->pinned_groups, smp_processor_id(),
> + add_group_to_dup_event_list, &sid);
> + visit_groups_merge(&ctx->flexible_groups, smp_processor_id(),
> + add_group_to_dup_event_list, &sid);
> + }
> +}
Oooh, wait a second, this isn't O(n), this looks like O(n^2):
add_event_to_dup_event_list() does a linear search over dup_event_list
for every single event, so rebuild_event_dup_list() ends up quadratic in
the number of events... that's not good.
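Something along the lines of hashing the attr bytes would get the rebuild back to O(n). A rough userspace sketch of the idea -- struct fake_attr, dup_id_for() and the FNV-1a hash are all made up for illustration, the real perf_event_attr is much bigger and the kernel would use jhash plus hlist instead:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Simplified stand-in for perf_event_attr -- the real struct is much
 * larger, but the idea is the same: hash the attr bytes once instead
 * of memcmp()ing against every entry collected so far.
 */
struct fake_attr {
	uint64_t type;
	uint64_t config;
};

#define DUP_HASH_BITS	6
#define DUP_HASH_SIZE	(1u << DUP_HASH_BITS)

struct dup_entry {
	struct fake_attr attr;
	int id;			/* index into dup_event_list */
	struct dup_entry *next;	/* hash-bucket chain */
};

static struct dup_entry *dup_hash[DUP_HASH_SIZE];
static struct dup_entry pool[256];
static int pool_used;
static int dup_count;

/* FNV-1a over the attr bytes; any reasonable byte hash (jhash, ...) works. */
static uint32_t attr_hash(const struct fake_attr *a)
{
	const unsigned char *p = (const unsigned char *)a;
	uint32_t h = 2166136261u;
	size_t i;

	for (i = 0; i < sizeof(*a); i++)
		h = (h ^ p[i]) * 16777619u;

	return h & (DUP_HASH_SIZE - 1);
}

/*
 * Expected O(1) lookup-or-insert, replacing the linear scan in
 * add_event_to_dup_event_list(); the full rebuild becomes O(n).
 */
static int dup_id_for(const struct fake_attr *a)
{
	uint32_t bucket = attr_hash(a);
	struct dup_entry *e;

	for (e = dup_hash[bucket]; e; e = e->next)
		if (!memcmp(&e->attr, a, sizeof(*a)))
			return e->id;	/* compatible event: share the slot */

	e = &pool[pool_used++];
	e->attr = *a;
	e->id = dup_count++;
	e->next = dup_hash[bucket];
	dup_hash[bucket] = e;

	return e->id;
}
```

With that, two events with identical attrs land on the same id without ever walking the whole list.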
Thread overview: 9+ messages
2018-05-04 23:11 [RFC 0/2] perf: Sharing PMU counters across compatible events Song Liu
2018-05-04 23:11 ` [RFC 1/2] perf: add move_dup() for PMU sharing Song Liu
2018-05-04 23:11 ` [RFC 2/2] perf: Sharing PMU counters across compatible events Song Liu
2018-05-28 11:15 ` Peter Zijlstra
2018-05-28 18:24 ` Song Liu
2018-05-28 11:24 ` Peter Zijlstra
2018-05-28 18:19 ` Song Liu
2018-05-28 11:26 ` Peter Zijlstra
2018-05-28 11:33 ` Peter Zijlstra [this message]