From: Mark Rutland <mark.rutland@arm.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: Kan Liang <kan.liang@intel.com>,
"acme@kernel.org" <acme@kernel.org>,
"a.p.zijlstra@chello.nl" <a.p.zijlstra@chello.nl>,
"eranian@google.com" <eranian@google.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH V2 1/6] perf,core: allow invalid context events to be part of sw/hw groups
Date: Fri, 17 Apr 2015 10:47:48 +0100
Message-ID: <20150417094746.GA21655@leverpostej>
In-Reply-To: <20150416212342.GW2366@two.firstfloor.org>

On Thu, Apr 16, 2015 at 10:23:42PM +0100, Andi Kleen wrote:
> > From my PoV that makes sense. One is CPU-affine, the other is not, and
> > the two cannot be scheduled in the same PMU transaction by the nature of
> > the hardware. Fundamentally, you cannot provide group semantics due to
> > this.
>
> Actually you can. Just use it like a free running counter, and the
> different groups sample it. This will work from the different CPUs,
> as long as the event is the same everywhere.

... which would give you arbitrary skew, because one counter is
free-running and the other is not (we stop the PMU when scheduling a
context in or out). For example, if a task's group is scheduled in for
only half of a sampling window, the free-running counter still counts
for the whole window, so the two counts no longer cover the same time.

From my PoV that violates group semantics, because now the events aren't
always counting at the same time (which is the reason I grouped them in
the first place).
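
To be concrete about what grouping guarantees: the kernel schedules the
group onto the PMU as one unit, and a single read returns counts covering
exactly the same scheduled-in intervals. A minimal sketch against
perf_event_open(2); the event choices are illustrative, error handling is
omitted, and the wrapper is needed because glibc provides none:

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	/* PERF_FORMAT_GROUP read layout: { u64 nr; u64 value[nr]; } */
	uint64_t buf[3];
	int leader, member;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;
	attr.read_format = PERF_FORMAT_GROUP;

	/* Group leader; the second event joins via group_fd. */
	leader = perf_event_open(&attr, 0, -1, -1, 0);

	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.disabled = 0;
	member = perf_event_open(&attr, 0, -1, leader, 0);

	ioctl(leader, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);
	/* ... workload under measurement ... */
	ioctl(leader, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

	/* One atomic read; both values cover exactly the same
	 * scheduled-in intervals. That is the guarantee at stake. */
	read(leader, buf, sizeof(buf));
	printf("cycles=%llu instructions=%llu\n",
	       (unsigned long long)buf[1], (unsigned long long)buf[2]);

	close(member);
	close(leader);
	return 0;
}

That atomic, same-intervals read is exactly what a free-running counter
sampled from several groups cannot provide.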

> The implementation may not be quite right yet, but the basic concept
> should work, and is useful.

I can see that associating counts from different PMUs at points in time
may be useful, even if they aren't sampled at precisely the same time
and you have weaker guarantees than the current group semantics.

However, you still cannot offer group semantics that way.
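
The weaker version is easy to sketch, reusing the perf_event_open()
wrapper from the sketch above. Here the CPU-wide event stands in for an
uncore-style free-running counter (attribute values illustrative; a
system-wide event may require a permissive
/proc/sys/kernel/perf_event_paranoid):

/* Fragment: assumes the wrapper and headers from the previous
 * sketch. Both events count from open(), since disabled == 0. */
uint64_t task_count, cpu_count;
struct perf_event_attr attr = {
	.size   = sizeof(attr),
	.type   = PERF_TYPE_HARDWARE,
	.config = PERF_COUNT_HW_CPU_CYCLES,
};
int task_ev = perf_event_open(&attr, 0, -1, -1, 0);  /* this task */
int cpu_ev  = perf_event_open(&attr, -1, 0, -1, 0);  /* CPU0, any task */

/* ... workload ... */

/* Two separate reads: cpu_ev kept counting while the task (and
 * task_ev with it) was scheduled out, so the two deltas do not
 * cover the same time. Good enough for association, not a group. */
read(task_ev, &task_count, sizeof(task_count));
read(cpu_ev,  &cpu_count,  sizeof(cpu_count));
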
Mark.