public inbox for linux-kernel@vger.kernel.org
From: Frederic Weisbecker <fweisbec@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>,
	LKML <linux-kernel@vger.kernel.org>,
	mingo@elte.hu, Paul Mackerras <paulus@samba.org>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [RFC] perf_events: ctx_flexible_sched_in() not maximizing PMU utilization
Date: Thu, 6 May 2010 19:11:43 +0200	[thread overview]
Message-ID: <20100506171141.GA5562@nowhere> (raw)
In-Reply-To: <1273155640.5605.300.camel@twins>

On Thu, May 06, 2010 at 04:20:40PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-05-06 at 16:03 +0200, Stephane Eranian wrote:
> > Hi,
> > 
> > Looking at ctx_flexible_sched_in(), the logic is that if group_sched_in()
> > fails for a HW group, then no other HW group in the list is even tried.
> > I don't understand this restriction. Groups are independent of each other.
> > The failure of one group should not block others from being scheduled,
> > otherwise you under-utilize the PMU.
> > 
> > What is the reason for this restriction? Can we lift it somehow?
> 
> Sure, but it will make scheduling much more expensive. The current
> scheme will only ever check the first N events because it stops at the
> first that fails, and since you can fit at most N events on the PMU,
> it's constant time.
> 
> To fix this issue you'd basically have to always iterate all events and
> only stop once the PMU is fully booked, which degrades to an O(n)
> worst-case algorithm.
> 
> But yeah, I did think of making the thing an RB-tree and basically
> scheduling on service received; that should fix the lop-sided
> round-robin we get with constrained events.
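
For reference, the stop-at-first-failure scheme described above can be
modelled roughly like this (an illustrative sketch only -- the function
name, the list-of-weights representation, and the slot counts are all
made up here, not the kernel's actual data structures):

```python
def sched_in_stop_at_first_failure(groups, pmu_slots):
    """Schedule HW groups in list order; stop at the first group that
    does not fit.  Since at most pmu_slots counters can be scheduled,
    the loop only ever examines a bounded prefix of the list."""
    scheduled = []
    free = pmu_slots
    for weight in groups:       # weight = counters this group needs
        if weight > free:
            break               # first failure stops all later HW groups
        scheduled.append(weight)
        free -= weight
    return scheduled

# On a 4-slot PMU, a 3-counter group fails after a 2-counter group,
# so the later 1-counter group is never even tried:
print(sched_in_stop_at_first_failure([2, 3, 1], 4))  # [2]
```

This is what causes the under-utilization Stephane describes: the
1-counter group above would have fit, but the early `break` skips it.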


I don't understand what you mean by scheduling on service received, or why
an RB-tree would solve that.

Unless you mean giving a weight to each group that has hardware events?
The weight would be the number of "slots" the group uses in the hardware
PMU, and it could be compared against the remaining free slots in the PMU.

So yeah, if the hw groups are sorted by weight, once one fails we know the
following ones will fail too. But that doesn't seem like the right solution,
as it would always favor low-weight groups and leave the heavy ones with
far fewer opportunities to be scheduled.
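
To make the starvation concern concrete, a weight-sorted variant could be
sketched like this (again a hypothetical model, not kernel code -- the
function name and slot numbers are invented for illustration):

```python
def sched_in_weight_sorted(groups, pmu_slots):
    """Hypothetical variant: visit groups in order of weight (slots
    needed).  Once one group fails, every later group is at least as
    heavy and must fail too, so we can stop early -- but light groups
    always claim the slots first."""
    scheduled = []
    free = pmu_slots
    for weight in sorted(groups):
        if weight > free:
            break               # all remaining groups are heavier
        scheduled.append(weight)
        free -= weight
    return scheduled

# On a 4-slot PMU, the 4-counter group is starved on every pass,
# because the 1- and 2-counter groups are always tried first:
print(sched_in_weight_sorted([4, 1, 2], 4))  # [1, 2]
```

The early-stop property is preserved, but as noted above, a heavy group
that would fit on its own never gets scheduled while lighter groups exist.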



Thread overview: 16+ messages
2010-05-06 14:03 [RFC] perf_events: ctx_flexible_sched_in() not maximizing PMU utilization Stephane Eranian
2010-05-06 14:20 ` Peter Zijlstra
2010-05-06 14:41   ` Stephane Eranian
2010-05-06 15:08     ` Peter Zijlstra
2010-05-06 16:26       ` Stephane Eranian
2010-05-06 17:11   ` Frederic Weisbecker [this message]
2010-05-06 17:30     ` Peter Zijlstra
2010-05-07  8:25       ` Peter Zijlstra
2010-05-07  8:44         ` Peter Zijlstra
2010-05-07  9:37         ` Stephane Eranian
2010-05-07 10:06           ` Peter Zijlstra
2010-05-07 10:49             ` Stephane Eranian
2010-05-07 11:15               ` Peter Zijlstra
2010-05-10  9:41                 ` Stephane Eranian
2010-05-14 14:55                   ` Peter Zijlstra
2010-05-14 15:07                     ` Peter Zijlstra
