From: Mike Galbraith <efault@gmx.de>
To: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@elte.hu>, Venki Pallipadi <venki@google.com>,
Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
linux-kernel <linux-kernel@vger.kernel.org>,
Tim Chen <tim.c.chen@linux.jf.intel.com>,
alex.shi@intel.com
Subject: Re: [patch v3 5/6] sched, ttwu_queue: queue remote wakeups only when crossing cache domains
Date: Fri, 02 Dec 2011 04:34:24 +0100 [thread overview]
Message-ID: <1322796864.4755.5.camel@marge.simson.net> (raw)
In-Reply-To: <20111202010832.714874234@sbsiddha-desk.sc.intel.com>
On Thu, 2011-12-01 at 17:07 -0800, Suresh Siddha wrote:
> plain text document attachment
> (use_ttwu_queue_when_crossing_cache_domains.patch)
> From: Mike Galbraith <efault@gmx.de>
>
> A context-switch-intensive microbenchmark on an 8-socket system showed
> ~600K times more resched IPIs on each logical CPU because of the
> TTWU_QUEUE sched feature, which queues the task on the remote cpu's
> queue and completes the wakeup locally using an IPI.
>
> As the TTWU_QUEUE sched feature exists to minimize the cache misses
> associated with remote wakeups, use the IPI only when the local and
> the remote CPUs are in different cache domains. Otherwise use the
> traditional remote wakeup.
FYI, Peter has already (improved and) queued this patch.
> With this, the context-switch microbenchmark performed 5 times better on
> the 8-socket NHM-EX system.
>
> Signed-off-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
> ---
> kernel/sched/core.c | 25 ++++++++++++++++++++++++-
> 1 file changed, 24 insertions(+), 1 deletion(-)
>
> Index: tip/kernel/sched/core.c
> ===================================================================
> --- tip.orig/kernel/sched/core.c
> +++ tip/kernel/sched/core.c
> @@ -1481,12 +1481,35 @@ static int ttwu_activate_remote(struct t
> #endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
> #endif /* CONFIG_SMP */
>
> +static int ttwu_share_cache(int this_cpu, int cpu)
> +{
> +#ifndef CONFIG_X86
> + struct sched_domain *sd;
> + int ret = 0;
> +
> + rcu_read_lock();
> + for_each_domain(this_cpu, sd) {
> + if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
> + continue;
> +
> + ret = (sd->flags & SD_SHARE_PKG_RESOURCES);
> + break;
> + }
> + rcu_read_unlock();
> +
> + return ret;
> +#else
> + return per_cpu(cpu_llc_id, this_cpu) == per_cpu(cpu_llc_id, cpu);
> +#endif
> +}
> +
> static void ttwu_queue(struct task_struct *p, int cpu)
> {
> struct rq *rq = cpu_rq(cpu);
>
> #if defined(CONFIG_SMP)
> - if (sched_feat(TTWU_QUEUE) && cpu != smp_processor_id()) {
> + if (sched_feat(TTWU_QUEUE) &&
> + !ttwu_share_cache(smp_processor_id(), cpu)) {
> sched_clock_cpu(cpu); /* sync clocks x-cpu */
> ttwu_queue_remote(p, cpu);
> return;
>
>
Thread overview: 30+ messages
2011-12-02 1:07 [patch v3 0/6] nohz idle load balancing patches Suresh Siddha
2011-12-02 1:07 ` [patch v3 1/6] sched, nohz: introduce nohz_flags in the struct rq Suresh Siddha
2011-12-06 9:53 ` [tip:sched/core] sched, nohz: Introduce nohz_flags in 'struct rq' tip-bot for Suresh Siddha
2011-12-06 12:14 ` [patch v3 1/6] sched, nohz: introduce nohz_flags in the struct rq Srivatsa Vaddagiri
2011-12-06 19:26 ` Suresh Siddha
2011-12-06 19:39 ` Peter Zijlstra
2011-12-06 20:24 ` [tip:sched/core] sched, nohz: Set the NOHZ_BALANCE_KICK flag for idle load balancer tip-bot for Suresh Siddha
2011-12-02 1:07 ` [patch v3 2/6] sched, nohz: track nr_busy_cpus in the sched_group_power Suresh Siddha
2011-12-06 9:54 ` [tip:sched/core] sched, nohz: Track " tip-bot for Suresh Siddha
2011-12-02 1:07 ` [patch v3 3/6] sched, nohz: sched group, domain aware nohz idle load balancing Suresh Siddha
2011-12-06 6:37 ` Srivatsa Vaddagiri
2011-12-06 19:19 ` Suresh Siddha
2011-12-06 20:24 ` [tip:sched/core] sched, nohz: Fix the idle cpu check in nohz_idle_balance tip-bot for Suresh Siddha
[not found] ` <A75BCAD09CE00A4280CDD4429D85F1F9261B42A1F9@orsmsx501.amr.corp.intel.com>
2011-12-06 19:27 ` [patch v3 3/6] sched, nohz: sched group, domain aware nohz idle load balancing Suresh Siddha
2011-12-06 9:54 ` [tip:sched/core] sched, nohz: Implement " tip-bot for Suresh Siddha
2011-12-02 1:07 ` [patch v3 4/6] sched, nohz: cleanup the find_new_ilb() using sched groups nr_busy_cpus Suresh Siddha
2011-12-06 9:55 ` [tip:sched/core] sched, nohz: Clean up " tip-bot for Suresh Siddha
2011-12-02 1:07 ` [patch v3 5/6] sched, ttwu_queue: queue remote wakeups only when crossing cache domains Suresh Siddha
2011-12-02 3:34 ` Mike Galbraith [this message]
2011-12-07 16:23 ` Peter Zijlstra
2011-12-07 19:20 ` Suresh Siddha
2011-12-08 6:06 ` Mike Galbraith
2011-12-08 9:41 ` Peter Zijlstra
2011-12-08 9:29 ` Peter Zijlstra
2011-12-08 19:34 ` Suresh Siddha
2011-12-08 21:50 ` Peter Zijlstra
2011-12-08 21:51 ` Peter Zijlstra
2011-12-08 10:02 ` Peter Zijlstra
2011-12-21 11:41 ` [tip:sched/core] sched: Only queue remote wakeups when crossing cache boundaries tip-bot for Peter Zijlstra
2011-12-02 1:07 ` [patch v3 6/6] sched: fix the sched group node allocation for SD_OVERLAP domain Suresh Siddha