public inbox for linux-kernel@vger.kernel.org
From: Mike Galbraith <efault@gmx.de>
To: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@elte.hu>, Venki Pallipadi <venki@google.com>,
	Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Tim Chen <tim.c.chen@linux.jf.intel.com>,
	alex.shi@intel.com
Subject: Re: [patch 5/6] sched: disable sched feature TTWU_QUEUE by default
Date: Sat, 19 Nov 2011 05:30:09 +0100	[thread overview]
Message-ID: <1321677009.6307.13.camel@marge.simson.net> (raw)
In-Reply-To: <20111118230554.105376150@sbsiddha-desk.sc.intel.com>

On Fri, 2011-11-18 at 15:03 -0800, Suresh Siddha wrote:
> plain text document attachment (disable_sched_ttwu_queue.patch)
> A context-switch-intensive microbenchmark on an 8-socket system saw
> ~600K times more resched IPIs on each logical CPU with this feature
> enabled by default. Disabling this feature makes that microbenchmark
> perform 5 times better.
> 
> Disabling this feature also showed a 2% performance improvement on an
> 8-socket OLTP workload.
> 
> More heuristics are needed to decide when and how to use this feature
> by default. For now, disable it.

Yeah, the overhead for very hefty switchers is high enough to increase
TCP_RR latency by up to 13% in my testing.  I used a trylock() to avoid
eating that cost in the common case, while leaving the contended-case
improvement intact.
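
The trylock idea can be illustrated with a user-space sketch (this is
an analog, not the kernel code: ttwu_try_local, struct rq, and the
pthread mutex standing in for the runqueue spinlock are all hypothetical
stand-ins):

```c
#include <assert.h>
#include <pthread.h>

/* User-space analog of the trylock approach: try to take the remote
 * runqueue lock directly and do the wakeup locally; only fall back to
 * the queued-IPI path when the lock is actually contended. */
struct rq {
	pthread_mutex_t lock;	/* stands in for the runqueue spinlock */
};

/* Returns 1 if the wakeup was handled locally (no IPI needed),
 * 0 if the caller should defer to the remote-queue + IPI path. */
int ttwu_try_local(struct rq *rq)
{
	if (pthread_mutex_trylock(&rq->lock) == 0) {
		/* Uncontended: activate the task here, skip the IPI. */
		pthread_mutex_unlock(&rq->lock);
		return 1;
	}
	/* Contended: the remote CPU is fighting over its lock anyway,
	 * so the queue + IPI path keeps its benefit there. */
	return 0;
}
```

With the lock free this returns 1; with the lock held elsewhere it
returns 0 and the caller would take the ttwu_queue_remote() route.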

Peter suggested doing the IPI only when crossing cache boundaries,
which worked for me as well.

---
 kernel/sched.c |   24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

Index: linux-3.0-tip/kernel/sched.c
===================================================================
--- linux-3.0-tip.orig/kernel/sched.c
+++ linux-3.0-tip/kernel/sched.c
@@ -2784,12 +2784,34 @@ static int ttwu_activate_remote(struct t
 #endif /* __ARCH_WANT_INTERRUPTS_ON_CTXSW */
 #endif /* CONFIG_SMP */
 
+static int ttwu_share_cache(int this_cpu, int cpu)
+{
+#ifndef CONFIG_X86
+	struct sched_domain *sd;
+	int ret = 0;
+
+	rcu_read_lock();
+	for_each_domain(this_cpu, sd) {
+		if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
+			continue;
+
+		ret = (sd->flags & SD_SHARE_PKG_RESOURCES);
+		break;
+	}
+	rcu_read_unlock();
+
+	return ret;
+#else
+	return per_cpu(cpu_llc_id, this_cpu) == per_cpu(cpu_llc_id, cpu);
+#endif
+}
+
 static void ttwu_queue(struct task_struct *p, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 
 #if defined(CONFIG_SMP)
-	if (sched_feat(TTWU_QUEUE) && cpu != smp_processor_id()) {
+	if (sched_feat(TTWU_QUEUE) && !ttwu_share_cache(smp_processor_id(), cpu)) {
 		sched_clock_cpu(cpu); /* sync clocks x-cpu */
 		ttwu_queue_remote(p, cpu);
 		return;
  

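The x86 branch of ttwu_share_cache() above boils down to comparing
last-level-cache IDs.  A user-space sketch of that comparison (the
llc_id table here is a made-up stand-in for per_cpu(cpu_llc_id); the
topology is hypothetical):

```c
#include <assert.h>

/* Hypothetical topology: CPUs 0-3 share one LLC, CPUs 4-7 another,
 * standing in for the kernel's per_cpu(cpu_llc_id) values. */
static const int llc_id[8] = { 0, 0, 0, 0, 1, 1, 1, 1 };

/* Mirrors the x86 branch of ttwu_share_cache(): two CPUs share a
 * cache iff their last-level-cache IDs match. */
int share_cache(int this_cpu, int cpu)
{
	return llc_id[this_cpu] == llc_id[cpu];
}
```

With this table, a wakeup from CPU 0 to CPU 3 stays local (shared LLC),
while CPU 0 to CPU 4 crosses a cache boundary and takes the IPI path.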

