From: Mark Rutland <mark.rutland@arm.com>
To: David Carrillo-Cisneros <davidcc@google.com>
Cc: linux-kernel@vger.kernel.org, "x86@kernel.org" <x86@kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Andi Kleen <ak@linux.intel.com>, Kan Liang <kan.liang@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Borislav Petkov <bp@suse.de>,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Vikas Shivappa <vikas.shivappa@linux.intel.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Vince Weaver <vince@deater.net>, Paul Turner <pjt@google.com>,
Stephane Eranian <eranian@google.com>
Subject: Re: [PATCH 1/2] perf/core: Make cgroup switch visit only cpuctxs with cgroup events
Date: Wed, 18 Jan 2017 12:11:13 +0000
Message-ID: <20170118121113.GC3231@leverpostej>
In-Reply-To: <20170117173840.10614-2-davidcc@google.com>
On Tue, Jan 17, 2017 at 09:38:39AM -0800, David Carrillo-Cisneros wrote:
> This is a low-hanging fruit optimization. It replaces the iteration over
> the "pmus" list in cgroup switch by an iteration over a new list that
> contains only cpuctxs with at least one cgroup event.
>
> This is necessary because the number of pmus has increased over the years,
> e.g. modern x86 server systems have well above 50 pmus.
> The iteration over the full pmu list is unnecessary and can be costly in
> heavy cache contention scenarios.
While I haven't done any measurement of the overhead, this looks like a
nice rework/cleanup.
Since this is only changing the management of cpu contexts, this
shouldn't adversely affect systems with heterogeneous CPUs. I've also
given this a spin on such a system, to no ill effect.
I have one (very minor) comment below, but either way:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
> @@ -889,6 +876,7 @@ list_update_cgroup_event(struct perf_event *event,
> struct perf_event_context *ctx, bool add)
> {
> struct perf_cpu_context *cpuctx;
> + struct list_head *lentry;
It might be worth calling this cpuctx_entry, so that it's clear which
list element it refers to. I can imagine we'll add more list
manipulation in this path in future.
Thanks,
Mark.
Thread overview: 6+ messages
2017-01-17 17:38 [PATCH 0/2] Optimize cgroup ctx switch and remove cpuctx->unique_pmu David Carrillo-Cisneros
2017-01-17 17:38 ` [PATCH 1/2] perf/core: Make cgroup switch visit only cpuctxs with cgroup events David Carrillo-Cisneros
2017-01-18 12:11 ` Mark Rutland [this message]
2017-01-18 19:26 ` David Carrillo-Cisneros
2017-01-17 17:38 ` [PATCH 2/2] perf/core: Remove perf_cpu_context::unique_pmu David Carrillo-Cisneros
2017-01-18 12:25 ` Mark Rutland