From: Peter Zijlstra <peterz@infradead.org>
To: Frederic Weisbecker <fweisbec@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>,
Paul Mackerras <paulus@samba.org>,
Steven Rostedt <rostedt@goodmis.org>,
Masami Hiramatsu <mhiramat@redhat.com>,
Jason Baron <jbaron@redhat.com>,
Arnaldo Carvalho de Melo <acme@redhat.com>
Subject: Re: [RFC PATCH] perf: Store relevant events in a hlist
Date: Wed, 10 Mar 2010 21:46:40 +0100 [thread overview]
Message-ID: <1268254000.5279.167.camel@twins> (raw)
In-Reply-To: <20100310203338.GA9737@nowhere>
On Wed, 2010-03-10 at 21:33 +0100, Frederic Weisbecker wrote:
> On Wed, Mar 10, 2010 at 08:34:52PM +0100, Peter Zijlstra wrote:
> > I'm not quite sure why you need the node thing, you already have a
> > hash-bucket to iterate, simply stick all events into the one bucket and
> > walk through it with a filter and process all events that match.
>
> This extra level of indirection was one of my biggest hesitations.
> In case we have a hash collision, I just wanted to ensure we keep
> an amortized O(n) in any case, at the cost of this level of
> indirection. Plus it removed the config:id check on every event,
> as the check is made only once.
>
> That said, I guess we can indeed remove that and have the events
> directly in the hash bucket. Assuming we do a good job avoiding
> collisions, it should be fine.
Right, let's start simple and go from there.
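The simple scheme being agreed on above — no per-id sub-list, just hash events by their config id into buckets and filter while walking — can be sketched in userspace C. This is a hypothetical simplification: the structure names and the plain `next` pointers are stand-ins, since the actual kernel code would use `struct hlist_head`/`struct hlist_node` and RCU-safe iteration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for a perf software event; the real kernel
 * structures and locking are omitted for the sketch. */
struct swevent {
    uint64_t config;        /* event id used as the hash key */
    int hits;               /* how often this event matched  */
    struct swevent *next;   /* bucket chaining               */
};

#define HASH_BITS 7
#define HASH_SIZE (1u << HASH_BITS)

static struct swevent *hash[HASH_SIZE];

static unsigned int bucket_of(uint64_t config)
{
    return (unsigned int)(config & (HASH_SIZE - 1));
}

static void event_add(struct swevent *e)
{
    unsigned int b = bucket_of(e->config);
    e->next = hash[b];
    hash[b] = e;
}

/* The suggested scheme: stick all events into the one bucket and
 * walk it with a filter; a collision only costs one extra compare
 * per colliding event. */
static void do_swevent(uint64_t config)
{
    struct swevent *e;

    for (e = hash[bucket_of(config)]; e; e = e->next)
        if (e->config == config)
            e->hits++;
}
```

The design trade-off discussed above is visible here: without the intermediate per-id node, two ids that collide share a bucket and each lookup pays one `config` compare per colliding entry, which stays cheap as long as the table avoids heavy collisions.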
> > As to all those for_each_online_cpu() thingies, it might make sense to
> > also have a global hash-table for events active on all cpus,... hmm was
> > that the reason for the node thing, one event cannot be in multiple
> > buckets?
>
> There are several reasons I've made it per cpu.
> Assuming we had a global hash table for wide events, we would get
> some cache bouncing each time an event is disabled/enabled (which
> happens quite often, as wide events are per task, and even worse if
> the initial task has numerous threads that each duplicate this
> event). Also, since wide events are per task, an event is only ever
> active on one cpu at a time, so it would be wasteful to check it on
> the other cpus.
Thing is, most events generated by perf are per cpu, even the per-task
ones; and if they are machine wide, the hash table bounces aren't the
biggest problem.
But yeah..
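Frederic's per-cpu argument above can be illustrated with a hedged userspace sketch — the names (`enable_on_cpu`, `find_on_cpu`) and the fixed-size arrays are hypothetical, not the kernel's actual layout. The point it shows: with one table per CPU, enabling or disabling a per-task event only touches the table of the CPU the task runs on, and lookups never scan events that are only active elsewhere.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simplification: one small hash table per CPU.
 * A per-task ("wide") event is only ever active on one CPU at a
 * time, so it is inserted into just that CPU's table; a single
 * global table would instead see cross-CPU cache-line bouncing
 * on every enable/disable. */
#define NCPU 4
#define NBKT 8

struct event {
    uint64_t config;
    struct event *next;
};

static struct event *tbl[NCPU][NBKT];

static void enable_on_cpu(struct event *e, int cpu)
{
    unsigned int b = (unsigned int)(e->config % NBKT);

    e->next = tbl[cpu][b];
    tbl[cpu][b] = e;
}

static int find_on_cpu(uint64_t config, int cpu)
{
    struct event *e;

    for (e = tbl[cpu][config % NBKT]; e; e = e->next)
        if (e->config == config)
            return 1;
    return 0;
}
```

Peter's counterpoint still applies to the sketch: when an event is machine wide it must be replicated into every CPU's table, at which point the insertion traffic, not the table bouncing, dominates.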
Thread overview: 13+ messages
2010-03-05 7:00 [PATCH 1/2] perf: Drop the obsolete profile naming for trace events Frederic Weisbecker
2010-03-05 7:00 ` [PATCH 2/2] perf: Walk through the relevant events only Frederic Weisbecker
2010-03-05 9:39 ` Peter Zijlstra
2010-03-05 17:03 ` Frederic Weisbecker
2010-03-05 17:20 ` Peter Zijlstra
2010-03-05 17:33 ` Frederic Weisbecker
2010-03-05 17:39 ` Peter Zijlstra
2010-03-05 17:46 ` Frederic Weisbecker
2010-03-08 18:35 ` [RFC PATCH] perf: Store relevant events in a hlist Frederic Weisbecker
2010-03-10 19:34 ` Peter Zijlstra
2010-03-10 20:33 ` Frederic Weisbecker
2010-03-10 20:46 ` Peter Zijlstra [this message]
2010-03-10 21:04 ` Frederic Weisbecker