public inbox for linux-kernel@vger.kernel.org
* [Patch] sched: fix group power for allnodes_domains
@ 2006-02-04  1:18 Siddha, Suresh B
From: Siddha, Suresh B @ 2006-02-04  1:18 UTC (permalink / raw)
  To: nickpiggin, mingo, hawkes, steiner; +Cc: linux-kernel

The current sched group power calculation for allnodes_domains is wrong. We
should really be using the cumulative power of the physical packages in that
group (similar to the calculation for node_domains).

The appended patch fixes this issue. This request is for inclusion in -mm, and
hence the patch is against 2.6.16-rc1-mm5 (as the multi-core sched patch in -mm
was touching this area).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

--- linux/kernel/sched.c	2006-02-01 14:27:52.413687032 -0800
+++ linux-core/kernel/sched.c	2006-02-01 14:25:57.734120976 -0800
@@ -5705,6 +5705,32 @@ static int cpu_to_allnodes_group(int cpu
 {
 	return cpu_to_node(cpu);
 }
+static void init_numa_sched_groups_power(struct sched_group *group_head)
+{
+	struct sched_group *sg = group_head;
+	int j;
+
+	if (!sg)
+		return;
+next_sg:
+	for_each_cpu_mask(j, sg->cpumask) {
+		struct sched_domain *sd;
+
+		sd = &per_cpu(phys_domains, j);
+		if (j != first_cpu(sd->groups->cpumask)) {
+			/*
+			 * Only add "power" once for each
+			 * physical package.
+			 */
+			continue;
+		}
+
+		sg->cpu_power += sd->groups->cpu_power;
+	}
+	sg = sg->next;
+	if (sg != group_head)
+		goto next_sg;
+}
 #endif
 
 /*
@@ -5950,43 +5976,13 @@ void build_sched_domains(const cpumask_t
 				(cpus_weight(sd->groups->cpumask)-1) / 10;
 		sd->groups->cpu_power = power;
 #endif
-
-#ifdef CONFIG_NUMA
-		sd = &per_cpu(allnodes_domains, i);
-		if (sd->groups) {
-			power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE *
-				(cpus_weight(sd->groups->cpumask)-1) / 10;
-			sd->groups->cpu_power = power;
-		}
-#endif
 	}
 
 #ifdef CONFIG_NUMA
-	for (i = 0; i < MAX_NUMNODES; i++) {
-		struct sched_group *sg = sched_group_nodes[i];
-		int j;
+	for (i = 0; i < MAX_NUMNODES; i++)
+		init_numa_sched_groups_power(sched_group_nodes[i]);
 
-		if (sg == NULL)
-			continue;
-next_sg:
-		for_each_cpu_mask(j, sg->cpumask) {
-			struct sched_domain *sd;
-
-			sd = &per_cpu(phys_domains, j);
-			if (j != first_cpu(sd->groups->cpumask)) {
-				/*
-				 * Only add "power" once for each
-				 * physical package.
-				 */
-				continue;
-			}
-
-			sg->cpu_power += sd->groups->cpu_power;
-		}
-		sg = sg->next;
-		if (sg != sched_group_nodes[i])
-			goto next_sg;
-	}
+	init_numa_sched_groups_power(sched_group_allnodes);
 #endif
 
 	/* Attach the domains */



* Re: [Patch] sched: fix group power for allnodes_domains
From: Nick Piggin @ 2006-02-06 11:46 UTC (permalink / raw)
  To: Siddha, Suresh B; +Cc: mingo, hawkes, steiner, linux-kernel

Siddha, Suresh B wrote:
> The current sched group power calculation for allnodes_domains is wrong. We
> should really be using the cumulative power of the physical packages in that
> group (similar to the calculation for node_domains).
> 
> The appended patch fixes this issue. This request is for inclusion in -mm, and
> hence the patch is against 2.6.16-rc1-mm5 (as the multi-core sched patch in -mm
> was touching this area).
> 
> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

Good catch,

Acked-by: Nick Piggin <npiggin@suse.de>

-- 
SUSE Labs, Novell Inc.

