linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] sched: update_top_cache_domain only at the times of building sched domain.
@ 2013-07-23 17:42 Rakib Mullick
  2013-07-24  3:26 ` Michael Wang
  0 siblings, 1 reply; 8+ messages in thread
From: Rakib Mullick @ 2013-07-23 17:42 UTC (permalink / raw)
  To: mingo, peterz; +Cc: linux-kernel

Currently, update_top_cache_domain() is called whenever a sched domain is built or destroyed. But the following
call path shows that both calls occur on the same path, so the update during sched domain destruction can be
skipped and performed only when sched domains are built.

	partition_sched_domains()
		detach_destroy_domain()
			cpu_attach_domain()
				update_top_cache_domain()
		build_sched_domains()
			cpu_attach_domain()
				update_top_cache_domain()

Changes since v1: use sd to determine when to skip, courtesy of PeterZ.

Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7c32cb..387fb66 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5138,7 +5138,8 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	rcu_assign_pointer(rq->sd, sd);
 	destroy_sched_domains(tmp, cpu);
 
-	update_top_cache_domain(cpu);
+	if (sd)
+		update_top_cache_domain(cpu);
 }
 
 /* cpus with isolated domains */





Thread overview: 8+ messages
2013-07-23 17:42 [PATCH v2] sched: update_top_cache_domain only at the times of building sched domain Rakib Mullick
2013-07-24  3:26 ` Michael Wang
2013-07-24  8:01   ` Rakib Mullick
2013-07-24  8:34     ` Michael Wang
2013-07-24 10:49       ` Peter Zijlstra
2013-07-25  2:49         ` Michael Wang
2013-07-24 13:57       ` Rakib Mullick
2013-07-25  3:15         ` Michael Wang
