From: Frederic Weisbecker <fweisbec@gmail.com>
To: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	linux-kernel@vger.kernel.org, mingo@elte.hu, paulus@samba.org,
	davem@davemloft.net, perfmon2-devel@lists.sf.net,
	eranian@gmail.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)
Date: Thu, 21 Jan 2010 11:45:15 +0100	[thread overview]
Message-ID: <20100121104513.GA5017@nowhere> (raw)
In-Reply-To: <bd4cb8901001210208h758a546cw19fc81300164ec55@mail.gmail.com>

On Thu, Jan 21, 2010 at 11:08:12AM +0100, Stephane Eranian wrote:
> >> > Do you mean this:
> >> >
> >> > hw_perf_group_sched_in_begin(&x86_pmu);
> >> >
> >> > for_each_event(event, group) {
> >> >         event->enable();        //do the collection here
> >> > }
> >> >
> >> >
> >> > if (hw_perf_group_sched_in_end(&x86_pmu)) {
> >> >         rollback...
> >> > }
> >> >
> >> > That requires to know in advance if we have hardware pmu
> >> > in the list though (can be a flag in the group).
> >>
> 
> I don't think this model can work without scheduling for each event.
> 
> Imagine the situation where you have more events than you have
> counters. At each tick you:
>    - disable all events
>    - rotate the list
>    - collect events from the list
>    - schedule events
>    - activate
> 
> Collection is the accumulation of events until you have as many
> events as you have counters, given that you defer scheduling until
> the end (see the loop above).
> 
> But that does not mean you can schedule what you have accumulated.
> And then what do you do, i.e., roll back to what?



If the scheduling validation fails, then you just need to roll back
the whole group.

That's essentially what you did in your patch, right? Except the loop
is now handled by the core code.
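To make the transactional model concrete, here is a minimal userspace
sketch. The `toy_pmu` struct, the greedy placement, and folding the
rollback into end() (returning success rather than an error code) are
all illustrative assumptions, not the kernel implementation; only the
begin/enable/end shape follows the pseudocode quoted above:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NUM_COUNTERS 2
#define MAX_EVENTS   8

/* Toy PMU: begin() snapshots the counter assignment, enable() only
 * collects the event, end() runs the constraint scheduler and, on
 * failure, restores the snapshot so the whole group rolls back. */
struct toy_pmu {
	int used[NUM_COUNTERS];         /* event on each counter, -1 = free */
	int saved[NUM_COUNTERS];        /* snapshot taken at begin() */
	unsigned int cmask[MAX_EVENTS]; /* counters event i is allowed to use */
	int collected[MAX_EVENTS];
	int n_collected;
};

void hw_perf_group_sched_in_begin(struct toy_pmu *pmu)
{
	memcpy(pmu->saved, pmu->used, sizeof(pmu->used));
	pmu->n_collected = 0;
}

/* enable() only collects; nothing is committed to a counter yet */
void toy_event_enable(struct toy_pmu *pmu, int event)
{
	pmu->collected[pmu->n_collected++] = event;
}

/* Place every collected event on a counter its constraint allows.
 * On any conflict, restore the snapshot: the whole group rolls back. */
bool hw_perf_group_sched_in_end(struct toy_pmu *pmu)
{
	int i, c;

	for (i = 0; i < pmu->n_collected; i++) {
		int ev = pmu->collected[i];
		bool placed = false;

		for (c = 0; c < NUM_COUNTERS; c++) {
			if (pmu->used[c] < 0 && (pmu->cmask[ev] & (1u << c))) {
				pmu->used[c] = ev;
				placed = true;
				break;
			}
		}
		if (!placed) {
			memcpy(pmu->used, pmu->saved, sizeof(pmu->used));
			return false;
		}
	}
	return true;
}
```

The point of the shape is that the core code owns the loop over the
group's events; the arch code only sees the begin/end transaction.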


> 
> With incremental scheduling, you can skip a group that conflicts
> with the groups already accumulated. What hw_perf_group_sched_in()
> gives you is simply a way to do incremental scheduling on a whole
> event group at once.


I don't understand why that can't be done with the above model.
In your patch we iterate through the whole group, collect events,
and schedule them.

With the above, the collection is just done on enable(), and the scheduling
is done with the new pmu callbacks.

The thing is essentially the same, so where is the obstacle?


> 
> Given the perf_event model, I believe you have no other way but to
> do incremental scheduling of events. That is the only way you
> guarantee you maximize the use of the PMU. Regardless of that, the
> scheduling model has a bias towards smaller and less constrained
> event groups.


But incremental scheduling is still the purpose of the above model.
I'm confused.
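For comparison, the incremental per-group scheduling Stephane
describes, where a conflicting group is skipped rather than aborting
the whole tick, could be sketched as follows. The group layout, the
busy bitmask, and the greedy placement are illustrative assumptions:

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_COUNTERS 2
#define MAX_GROUP    4

/* Hypothetical sketch: each group is tried against the counters
 * already committed; a group that cannot be fully placed is skipped,
 * and scheduling continues with the next group in the rotation. */
struct toy_group {
	int n_events;
	unsigned int cmask[MAX_GROUP]; /* counters each event may use */
};

/* Try to place one whole group on top of the busy mask; commit only
 * if every event in the group finds a free, allowed counter. */
static bool try_group(unsigned int *busy, const struct toy_group *g)
{
	unsigned int tmp = *busy;
	int i, c;

	for (i = 0; i < g->n_events; i++) {
		bool placed = false;

		for (c = 0; c < NUM_COUNTERS; c++) {
			if (!(tmp & (1u << c)) && (g->cmask[i] & (1u << c))) {
				tmp |= 1u << c;
				placed = true;
				break;
			}
		}
		if (!placed)
			return false;	/* conflict: skip this group */
	}
	*busy = tmp;			/* group scheduled as a unit */
	return true;
}

/* Schedule as many groups as fit; conflicting groups are skipped
 * instead of blocking the groups behind them. */
int schedule_groups(const struct toy_group *groups, int n, bool *on)
{
	unsigned int busy = 0;
	int i, scheduled = 0;

	for (i = 0; i < n; i++) {
		on[i] = try_group(&busy, &groups[i]);
		if (on[i])
			scheduled++;
	}
	return scheduled;
}
```

Note that a later group can still be scheduled even when an earlier
one conflicts, which is Stephane's point about maximizing PMU use.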


