From: "Alex Shi" <alex.shi@intel.com>
To: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Venkatesh Pallipadi <venki@google.com>,
Ingo Molnar <mingo@elte.hu>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Paul Turner <pjt@google.com>, Mike Galbraith <efault@gmx.de>,
Nick Piggin <npiggin@gmail.com>,
"Chen, Tim C" <tim.c.chen@intel.com>
Subject: Re: [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 - v1
Date: Fri, 18 Feb 2011 09:05:53 +0800
Message-ID: <1297991153.14712.636.camel@debian>
In-Reply-To: <1297473616.2806.16.camel@sbsiddha-MOBL3.sc.intel.com>
> I am also ok with removing this code. But as Venki mentioned earlier
> (http://marc.info/?l=linux-kernel&m=129735866732171&w=2), we need to
> make sure idle core gets priority instead of an idle smt-thread on a
> busy core while pulling the load from the busiest socket.
>
> I requested Venki to post these 2 patches of removing the propagation of
> busy sibling status to an idle sibling and prioritizing the idle core
> while pulling the load. I will request Alex and Tim to run their
> performance workloads to make sure that this doesn't show any
> regressions.
I have received the sd_idle deletion and the other patches, so I tested
this v1 patch on a 2.6.38-rc4 kernel on WSM-EP, NHM-EP, and Core2
machines. I didn't find any clear performance regression or improvement,
including on hackbench/specjbb/volano etc.
>
> thanks,
> suresh
>
Thread overview: 24+ messages
2011-02-04 20:51 [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 Venkatesh Pallipadi
2011-02-04 21:25 ` [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 - v1 Venkatesh Pallipadi
2011-02-07 13:50 ` Peter Zijlstra
2011-02-07 18:21 ` Venkatesh Pallipadi
2011-02-07 19:53 ` Suresh Siddha
2011-02-08 17:37 ` Venkatesh Pallipadi
2011-02-08 18:13 ` Misc sd_idle related fixes Venkatesh Pallipadi
2011-02-09 9:29 ` Peter Zijlstra
2011-02-10 17:24 ` Venkatesh Pallipadi
2011-02-08 18:13 ` [PATCH 1/3] sched: Resolve sd_idle and first_idle_cpu Catch-22 Venkatesh Pallipadi
2011-02-08 18:13 ` [PATCH 2/3] sched: fix_up broken SMT load balance dilation Venkatesh Pallipadi
2011-02-08 18:13 ` [PATCH 3/3] sched: newidle balance set idle_timestamp only on successful pull Venkatesh Pallipadi
2011-02-09 3:37 ` Mike Galbraith
2011-02-09 15:55 ` [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 - v1 Peter Zijlstra
2011-02-12 1:20 ` Suresh Siddha
2011-02-14 22:38 ` [PATCH] sched: Wholesale removal of sd_idle logic Venkatesh Pallipadi
2011-02-15 17:01 ` Vaidyanathan Srinivasan
2011-02-15 18:26 ` Venkatesh Pallipadi
2011-02-16 8:53 ` Vaidyanathan Srinivasan
2011-02-16 11:43 ` Peter Zijlstra
2011-02-16 13:50 ` [tip:sched/core] " tip-bot for Venkatesh Pallipadi
2011-02-15 9:15 ` [PATCH] sched: Resolve sd_idle and first_idle_cpu Catch-22 - v1 Peter Zijlstra
2011-02-15 19:11 ` Suresh Siddha
2011-02-18  1:05 ` Alex Shi [this message]