From: Mark Rutland <mark.rutland@arm.com>
To: David Carrillo-Cisneros <davidcc@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	vince@deater.net, Stephane Eranian <eranian@google.com>,
	Andi Kleen <andi@firstfloor.org>, Kan Liang <kan.liang@intel.com>,
	Vikas Shivappa <vikas.shivappa@linux.intel.com>
Subject: Re: [PATCH] perf/core: introduce context per CPU event list
Date: Tue, 3 Jan 2017 12:00:28 +0000	[thread overview]
Message-ID: <20170103120028.GD5605@leverpostej> (raw)
In-Reply-To: <CALcN6miCejPbqGnSj5wNY18BGeS-pQmeV07RM4FN3+fL1hnwxw@mail.gmail.com>

On Sun, Jan 01, 2017 at 01:18:27PM -0800, David Carrillo-Cisneros wrote:
> On Sun, Jan 1, 2017 at 12:20 PM, David Carrillo-Cisneros
> <davidcc@google.com> wrote:
> > From: Mark Rutland <mark.rutland@arm.com>
> >
> > On Thu, Nov 10, 2016 at 05:26:32PM +0100, Peter Zijlstra wrote:
> >> On Thu, Nov 10, 2016 at 02:10:37PM +0000, Mark Rutland wrote:
> >>
> >> > Sure, that sounds fine for scheduling (including big.LITTLE).
> >> >
> >> > I might still be misunderstanding something, but I don't think that
> >> > helps Kan's case: since INACTIVE events which will fail their filters
> >> > (including the CPU check) will still be in the tree, they will still
> >> > have to be iterated over.
> >> >
> >> > That is, unless we also sort the tree by event->cpu, or if in those
> >> > cases we only care about ACTIVE events and can use an active list.
> >>
> >> A few emails back up I wrote:
> >>
> >> >> If we stick all events in an RB-tree sorted on: {pmu,cpu,runtime} we
> >
> > Ah, sorry. Clearly I wouldn't pass a reading comprehension test today.
> >
> >> Looking at the code there's also cgroup muck, not entirely sure where in
> >> the sort order that should go if at all.
> >>
> >> But having pmu and cpu in there would cure the big-little and
> >> per-task-per-cpu event issues.
> >
> > Yup, that all makes sense to me now (modulo the cgroup stuff I also
> > haven't considered yet).
> 
> cgroup events are stored in each PMU's cpuctx, so they wouldn't benefit
> from a {pmu,cpu} sort order. Yet the RB-tree would help if it could use
> the cgroup as a key for CPU contexts.
> 
> Is there a reason to have runtime as part of the RB-tree?

Fairer scheduling of the events, especially where cross-group conflicts
for PMU resources are non-trivial to solve. IIRC this is the major
reason Peter wanted the RB tree in the first place.
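
To illustrate (just a sketch, not a tested patch; I'm guessing at the
exact fields), the {pmu, cpu, runtime} ordering Peter described could be
a comparator along these lines:

static bool perf_event_less(struct perf_event *a, struct perf_event *b)
{
	/* Group by PMU first, then by target CPU... */
	if (a->pmu != b->pmu)
		return (unsigned long)a->pmu < (unsigned long)b->pmu;
	if (a->cpu != b->cpu)
		return a->cpu < b->cpu;
	/* ...then least-run first, so starved events get picked next. */
	return a->total_time_running < b->total_time_running;
}

That way events with the same {pmu, cpu} sort least-run first, so
whatever has been starved gets scheduled first next time around.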

> Couldn't a FIFO list work just fine? A node could have an ACTIVE and
> an INACTIVE FIFO list: events would move in/out of the tree on ioctl,
> and between ACTIVE and INACTIVE on sched in/out.
> This would speed up both sched in and sched out.
> 
> The node would be something like this:
> 
> struct ctx_rbnode {
>         struct rb_node node;               /* links into the ctx's RB-tree */
>         struct list_head active_events;    /* events currently scheduled in */
>         struct list_head inactive_events;  /* events awaiting scheduling */
> };
> 
> And the insertion order would be {pmu, cpu} for task contexts (cpu == -1
> for events without fixed cpu) and {cgroup} for cpuctxs (CPU events would
> have NULL cgrp).
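
(For concreteness, I'd imagine sched-in under that scheme looking
roughly like the below; try_sched_in_event() and ctx_list_entry are
made-up names standing in for the real activation path and list
linkage.)

static void ctx_rbnode_sched_in(struct perf_event_context *ctx,
				struct ctx_rbnode *rbn)
{
	struct perf_event *event, *tmp;

	/*
	 * Walk the per-node INACTIVE list; anything we manage to
	 * activate moves to the ACTIVE list. No tree rebalancing is
	 * needed at context-switch time.
	 */
	list_for_each_entry_safe(event, tmp, &rbn->inactive_events,
				 ctx_list_entry) {
		if (try_sched_in_event(ctx, event) == 0)
			list_move_tail(&event->ctx_list_entry,
				       &rbn->active_events);
	}
}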

The problem with using a list rather than a tree is that we have to
perform a linear walk of the list every time we want to find the
relevant sub-list (e.g. scheduling, insertion). Using a tree makes
finding the relevant portion much cheaper.
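
That is, with a {pmu,cpu} sort key we can binary-search straight to the
relevant events. A hand-wavy sketch (rb_node_entry is a made-up member
name, and the raw pointer comparison is just for illustration):

static struct perf_event *
perf_find_first(struct rb_root *root, struct pmu *pmu, int cpu)
{
	struct rb_node *node = root->rb_node;
	struct perf_event *first = NULL;

	while (node) {
		struct perf_event *e = rb_entry(node, struct perf_event,
						rb_node_entry);

		/* Descend, remembering the leftmost {pmu,cpu} match. */
		if (pmu < e->pmu || (pmu == e->pmu && cpu < e->cpu)) {
			node = node->rb_left;
		} else if (pmu > e->pmu || (pmu == e->pmu && cpu > e->cpu)) {
			node = node->rb_right;
		} else {
			first = e;
			node = node->rb_left;
		}
	}

	/* O(log n) to get here, vs O(n) for a full list walk. */
	return first;
}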

> I am interested in getting this to work as part of the cgroup context
> switch optimization that CQM/CMT needs. See discussion in:
> 
> https://patchwork.kernel.org/patch/9478617/
> 
> Is anyone actively working on it?

Unfortunately I have too much on my plate at the moment; I'm not
actively working on this.

I'm happy to review and test patches, though!

Thanks,
Mark.

Thread overview: 15+ messages
2016-11-09 19:04 [PATCH] perf/core: introduce context per CPU event list kan.liang
2016-11-10  8:33 ` Peter Zijlstra
2016-11-10 11:05   ` Mark Rutland
2016-11-10 11:37     ` Peter Zijlstra
2016-11-10 12:04       ` Mark Rutland
2016-11-10 12:12         ` Peter Zijlstra
2016-11-10 12:26           ` Mark Rutland
2016-11-10 12:58             ` Peter Zijlstra
2016-11-10 14:10               ` Mark Rutland
2016-11-10 14:31                 ` Liang, Kan
2016-11-10 16:26                 ` Peter Zijlstra
2016-11-10 17:01                   ` Mark Rutland
     [not found] ` <1483302059-4334-1-git-send-email-davidcc@google.com>
2017-01-01 21:18   ` David Carrillo-Cisneros
2017-01-03 12:00     ` Mark Rutland [this message]
2017-01-04  0:39       ` David Carrillo-Cisneros
