public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Ingo Molnar <mingo@elte.hu>, linux-kernel@vger.kernel.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Anton Blanchard <anton@au1.ibm.com>,
	Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>,
	Suresh Siddha <suresh.b.siddha@intel.com>,
	Venkatesh Pallipadi <venki@google.com>,
	Paul Turner <pjt@google.com>, Mike Galbraith <efault@gmx.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Andreas Herrmann <andreas.herrmann3@amd.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: [RFC][PATCH 06/14] Simplify group creation
Date: Mon, 14 Mar 2011 16:06:19 +0100	[thread overview]
Message-ID: <20110314152226.923003754@chello.nl> (raw)
In-Reply-To: <20110314150613.749843433@chello.nl>

[-- Attachment #1: sched-foo5.patch --]
[-- Type: text/plain, Size: 3198 bytes --]

Instead of calling build_sched_groups() once for every sched_domain
level we might possibly have created, note that we can simply iterate
the sched_domain tree and call it for each sched_domain actually
present.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
---
 kernel/sched.c |   24 +++++-------------------
 1 file changed, 5 insertions(+), 19 deletions(-)

Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -7207,15 +7207,12 @@ static struct sched_domain *__build_smt_
 	return sd;
 }
 
-static void build_sched_groups(struct s_data *d, enum sched_domain_level l,
+static void build_sched_groups(struct s_data *d, struct sched_domain *sd,
 			       const struct cpumask *cpu_map, int cpu)
 {
-	struct sched_domain *sd;
-
-	switch (l) {
+	switch (sd->level) {
 #ifdef CONFIG_SCHED_SMT
 	case SD_LV_SIBLING: /* set up CPU (sibling) groups */
-		sd = &per_cpu(cpu_domains, cpu).sd;
 		if (cpu == cpumask_first(sched_domain_span(sd)))
 			init_sched_build_groups(sched_domain_span(sd), cpu_map,
 						&cpu_to_cpu_group,
@@ -7224,7 +7221,6 @@ static void build_sched_groups(struct s_
 #endif
 #ifdef CONFIG_SCHED_MC
 	case SD_LV_MC: /* set up multi-core groups */
-		sd = &per_cpu(core_domains, cpu).sd;
 		if (cpu == cpumask_first(sched_domain_span(sd)))
 			init_sched_build_groups(sched_domain_span(sd), cpu_map,
 						&cpu_to_core_group,
@@ -7233,7 +7229,6 @@ static void build_sched_groups(struct s_
 #endif
 #ifdef CONFIG_SCHED_BOOK
 	case SD_LV_BOOK: /* set up book groups */
-		sd = &per_cpu(book_domains, cpu).sd;
 		if (cpu == cpumask_first(sched_domain_span(sd)))
 			init_sched_build_groups(sched_domain_span(sd), cpu_map,
 						&cpu_to_book_group,
@@ -7241,7 +7236,6 @@ static void build_sched_groups(struct s_
 		break;
 #endif
 	case SD_LV_CPU: /* set up physical groups */
-		sd = &per_cpu(phys_domains, cpu).sd;
 		if (cpu == cpumask_first(sched_domain_span(sd)))
 			init_sched_build_groups(sched_domain_span(sd), cpu_map,
 						&cpu_to_phys_group,
@@ -7249,7 +7243,6 @@ static void build_sched_groups(struct s_
 		break;
 #ifdef CONFIG_NUMA
 	case SD_LV_NODE:
-		sd = &per_cpu(node_domains, cpu).sd;
 		if (cpu == cpumask_first(sched_domain_span(sd)))
 			init_sched_build_groups(sched_domain_span(sd), cpu_map,
 						&cpu_to_node_group,
@@ -7299,17 +7292,10 @@ static int __build_sched_domains(const s
 		sd = __build_mc_sched_domain(&d, cpu_map, attr, sd, i);
 		sd = __build_smt_sched_domain(&d, cpu_map, attr, sd, i);
 
-		for (tmp = sd; tmp; tmp = tmp->parent)
+		for (tmp = sd; tmp; tmp = tmp->parent) {
 			tmp->span_weight = cpumask_weight(sched_domain_span(tmp));
-	}
-
-	for_each_cpu(i, cpu_map) {
-		build_sched_groups(&d, SD_LV_SIBLING, cpu_map, i);
-		build_sched_groups(&d, SD_LV_BOOK, cpu_map, i);
-		build_sched_groups(&d, SD_LV_MC, cpu_map, i);
-		build_sched_groups(&d, SD_LV_CPU, cpu_map, i);
-		build_sched_groups(&d, SD_LV_NODE, cpu_map, i);
-		build_sched_groups(&d, SD_LV_ALLNODES, cpu_map, i);
+			build_sched_groups(&d, tmp, cpu_map, i);
+		}
 	}
 
 	/* Calculate CPU power for physical packages and nodes */




Thread overview: 23+ messages
2011-03-14 15:06 [RFC][PATCH 00/14] Rewrite sched_domain/sched_group creation Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 01/14] sched: Remove obsolete arch_ prefixes Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 02/14] sched: Simplify cpu_power initialization Peter Zijlstra
2011-03-14 21:35   ` Steven Rostedt
2011-03-14 21:46     ` Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 03/14] sched: Simplify build_sched_groups Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 04/14] sched: Change NODE sched_domain group creation Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 05/14] sched: Clean up some ALLNODES code Peter Zijlstra
2011-03-14 15:06 ` Peter Zijlstra [this message]
2011-03-14 15:06 ` [RFC][PATCH 07/14] sched: Simplify finding the lowest sched_domain Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 08/14] sched: Simplify sched_groups_power initialization Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 09/14] sched: Dynamically allocate sched_domain/sched_group data-structures Peter Zijlstra
2011-03-18  9:08   ` Bharata B Rao
2011-03-18  9:37     ` Peter Zijlstra
2011-03-19  1:23   ` Venkatesh Pallipadi
2011-03-25 21:06     ` Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 10/14] sched: Simplify the free path some Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 11/14] sched: Reduce some allocation pressure Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 12/14] sched: Simplify NODE/ALLNODES domain creation Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 13/14] sched: Remove nodemask allocation Peter Zijlstra
2011-03-14 15:06 ` [RFC][PATCH 14/14] sched: Remove some dead code Peter Zijlstra
2011-03-25 21:46 ` [RFC][PATCH 00/14] Rewrite sched_domain/sched_group creation Peter Zijlstra
2011-03-25 21:53   ` Peter Zijlstra
