public inbox for linux-kernel@vger.kernel.org
From: Tejun Heo <tj@kernel.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@redhat.com>, Namhyung Kim <namhyung@kernel.org>,
	linux-kernel@vger.kernel.org, kernel-team@fb.com,
	alexey.budankov@linux.intel.com
Subject: Re: [RFC] Sharing PMU counters across compatible events
Date: Mon, 11 Dec 2017 07:47:44 -0800	[thread overview]
Message-ID: <20171211154744.GK2421075@devbig577.frc2.facebook.com> (raw)
In-Reply-To: <20171206123500.nl5pixkkmc5joacq@hirez.programming.kicks-ass.net>

Hello, Peter.

On Wed, Dec 06, 2017 at 01:35:00PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 01, 2017 at 06:19:50AM -0800, Tejun Heo wrote:
> 
> > What do you think?  Would this be something worth pursuing?
> 
> My worry with the whole thing is that it makes PMU scheduling _far_ more
> expensive.
>
> Currently HW PMU scheduling is 'bounded' by the fact that we have
> bounded hardware resources (actually placing the events on these
> resources is already very complex because not every event can go on
> every counter).
>
> We also stop trying to schedule HW events when we find we cannot place
> more.
> 
> If we were to support this sharing thing (and you were correct in noting
> that the specific conditions for matching events is going to be very
> tricky indeed), both the above go out the window.

Understood, but I wonder whether something like this can be made
significantly cheaper and, hopefully, bounded.  I could easily be
getting the details wrong, but it doesn't seem like we'd need to
compute much of this dynamically on context switch.

Let's say that we can pre-compute most of the mergeability detection
and push the value propagation to read time rather than event time,
so that we get the same functionality with insignificant hot path
overhead.  Does that sound acceptable to you?

Thanks.

-- 
tejun


Thread overview: 9+ messages
2017-12-01 14:19 [RFC] Sharing PMU counters across compatible events Tejun Heo
2017-12-06 11:42 ` Jiri Olsa
2017-12-11 15:34   ` Tejun Heo
2017-12-13 10:18     ` Jiri Olsa
2017-12-13 16:15       ` Tejun Heo
2017-12-06 12:35 ` Peter Zijlstra
2017-12-11 15:47   ` Tejun Heo [this message]
2017-12-12 22:37     ` Peter Zijlstra
2017-12-13 16:18       ` Tejun Heo
