public inbox for linux-kernel@vger.kernel.org
From: Alexey Budankov <alexey.budankov@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Andi Kleen <ak@linux.intel.com>, Kan Liang <kan.liang@intel.com>,
	Dmitri Prokhorov <Dmitry.Prohorov@intel.com>,
	Valery Cherepennikov <valery.cherepennikov@intel.com>,
	David Carrillo-Cisneros <davidcc@google.com>,
	Stephane Eranian <eranian@google.com>,
	Mark Rutland <mark.rutland@arm.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2]: perf/core: addressing 4x slowdown during per-process, profiling of STREAM benchmark on Intel Xeon Phi
Date: Mon, 29 May 2017 12:15:14 +0300	[thread overview]
Message-ID: <75f031d8-68ec-4cd6-752f-1fbecaa86026@linux.intel.com> (raw)
In-Reply-To: <20170529074636.tjftcdtcg6op74i3@hirez.programming.kicks-ass.net>

On 29.05.2017 10:46, Peter Zijlstra wrote:
> On Sat, May 27, 2017 at 02:19:51PM +0300, Alexey Budankov wrote:
>> @@ -571,6 +587,27 @@ struct perf_event {
>>   	 * either suffices for read.
>>   	 */
>>   	struct list_head		group_entry;
>> +	/*
>> +	 * Node in the pinned or flexible tree of the event context;
>> +	 * the node is unused when its event is not attached to the
>> +	 * tree directly but hangs off the group_list of an event
>> +	 * that is directly attached to the tree;
>> +	 */
>> +	struct rb_node			group_node;
>> +	/*
>> +	 * List of all groups allocated for the same CPU;
>> +	 * the list is empty when its event is not attached to the
>> +	 * tree directly but hangs off the group_list of an event
>> +	 * that is directly attached to the tree;
>> +	 */
>> +	struct list_head		group_list;
>> +	/*
>> +	 * Entry in a group_list list above; the entry is linked
>> +	 * into the event's own group_list when the event is
>> +	 * directly attached to the pinned or
>> +	 * flexible tree;
>> +	 */
>> +	struct list_head		group_list_entry;
>>   	struct list_head		sibling_list;
>>
>>   	/*
> 
>> @@ -742,7 +772,17 @@ struct perf_event_context {
>>
>>   	struct list_head		active_ctx_list;
>>   	struct list_head		pinned_groups;
>> +	/*
>> +	 * CPU tree for pinned groups; keeps event's group_node nodes
>> +	 * of attached pinned groups;
>> +	 */
>> +	struct rb_root			pinned_tree;
>>   	struct list_head		flexible_groups;
>> +	/*
>> +	 * CPU tree for flexible groups; keeps event's group_node nodes
>> +	 * of attached flexible groups;
>> +	 */
>> +	struct rb_root			flexible_tree;
>>   	struct list_head		event_list;
>>   	int				nr_events;
>>   	int				nr_active;
>> @@ -758,6 +798,7 @@ struct perf_event_context {
>>   	 */
>>   	u64				time;
>>   	u64				timestamp;
>> +	struct perf_event_tstamp	tstamp_data;
>>
>>   	/*
>>   	 * These fields let us detect when two contexts have both
> 
> 
> So why do we now have a list _and_ a tree for the same entries?
We need the groups list to iterate over all groups configured for 
collection, and we need the tree to quickly reach only the groups 
allocated for a particular CPU.
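The split above can be sketched as a tree keyed by CPU where each node 
carries the list of groups for that CPU. This is only an illustrative, 
self-contained sketch: it uses a plain binary search tree and made-up 
names (cpu_node, group_attach, node_find), whereas the actual patch 
uses the kernel's rb_root/rb_node API and embeds group_node and 
group_list_entry in struct perf_event.

```c
#include <stdlib.h>

/* Hypothetical stand-in for a perf event group bound to one CPU. */
struct group {
	int cpu;
	struct group *next;	/* next group on the same CPU ("group_list") */
};

/* One tree node per CPU with groups; the patch uses rb_node instead. */
struct cpu_node {
	int cpu;
	struct group *groups;	/* head of the per-CPU group list */
	struct cpu_node *left, *right;
};

/* Find the node for a given CPU, or NULL if no groups are on it. */
static struct cpu_node *node_find(struct cpu_node *root, int cpu)
{
	while (root) {
		if (cpu < root->cpu)
			root = root->left;
		else if (cpu > root->cpu)
			root = root->right;
		else
			return root;
	}
	return NULL;
}

/*
 * Attach a group: the first group for a CPU creates the tree node,
 * later groups for the same CPU are chained onto that node's list
 * (mirroring group_list_entry in the patch).
 */
static void group_attach(struct cpu_node **rootp, struct group *g)
{
	while (*rootp) {
		if (g->cpu < (*rootp)->cpu) {
			rootp = &(*rootp)->left;
		} else if (g->cpu > (*rootp)->cpu) {
			rootp = &(*rootp)->right;
		} else {
			g->next = (*rootp)->groups;
			(*rootp)->groups = g;
			return;
		}
	}
	struct cpu_node *n = calloc(1, sizeof(*n));
	n->cpu = g->cpu;
	g->next = NULL;
	n->groups = g;
	*rootp = n;
}
```

With this shape, scheduling in a context for CPU n walks only the list 
hanging off node_find(root, n), instead of filtering the whole 
flexible_groups list, which is where the Knights Landing-scale speedup 
would come from.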
> 
> 

-Alexey


Thread overview: 27+ messages
2017-05-26 22:13 [PATCH]: perf/core: addressing 4x slowdown during per-process profiling of STREAM benchmark on Intel Xeon Phi Alexey Budankov
2017-05-27 11:19 ` [PATCH v2]: perf/core: addressing 4x slowdown during per-process, " Alexey Budankov
2017-05-29  7:45   ` Peter Zijlstra
2017-05-29  9:24     ` Alexey Budankov
2017-05-29 10:33       ` Peter Zijlstra
2017-05-29 10:46         ` Alexey Budankov
2017-05-29  7:46   ` Peter Zijlstra
2017-05-29  9:15     ` Alexey Budankov [this message]
2017-05-29 10:43       ` Peter Zijlstra
2017-05-29 10:56         ` Alexey Budankov
2017-05-29 11:23           ` Peter Zijlstra
2017-05-29 11:45             ` Alexey Budankov
2017-06-15 17:42               ` Alexey Budankov
2017-06-21 15:39                 ` Alexey Budankov
2017-06-30 10:22                   ` Alexey Budankov
2017-05-31 21:33   ` David Carrillo-Cisneros
2017-06-14 11:27     ` Alexey Budankov
2017-05-29 12:03 ` [PATCH]: perf/core: addressing 4x slowdown during per-process " Alexander Shishkin
2017-05-29 13:43   ` Alexey Budankov
2017-05-29 15:22     ` Peter Zijlstra
2017-05-29 15:29       ` Peter Zijlstra
2017-05-29 16:41         ` Alexey Budankov
2017-05-30  8:29     ` Alexander Shishkin
2017-06-14 10:07       ` Alexey Budankov
2017-06-15 17:44         ` Alexey Budankov
  -- strict thread matches above, loose matches on Subject: below --
2017-05-31  0:04 [PATCH v2]: perf/core: addressing 4x slowdown during per-process, " Arun Kalyanasundaram
2017-06-14 12:26 ` Alexey Budankov
