* [RFC PATCH v2 0/3] sched: simplify the select_task_rq_fair()
@ 2013-01-17  5:46 Michael Wang
  2013-01-17  5:47 ` [RFC PATCH v2 1/3] sched: schedule balance map foundation Michael Wang
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Michael Wang @ 2013-01-17  5:46 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Mike Galbraith,
	Andrew Morton, Tejun Heo, Nikunj A. Dadhania, Namhyung Kim,
	Ram Pai

v2 change log:
	Fix code style issues.
	Fix a BUG caused by a NULL affine_sd.
	Add support for the NUMA domain.
	Other small fixes to follow the old logic.
	Provide more test data with finer granularity.

This patch set simplifies select_task_rq_fair() with a schedule balance
map.

After getting rid of the complex code and reorganizing the logic, pgbench
shows an improvement: the more clients, the bigger the improvement.

				Prev:		Post:

	| db_size | clients |	| tps  |	|  tps  |	  gain
	+---------+---------+   +------+        +-------+
	| 22 MB   |       1 |   |10794 |        | 10862 |
	| 22 MB   |       2 |   |21567 |        | 21711 |
	| 22 MB   |       4 |   |41621 |        | 42609 |
	| 22 MB   |       8 |   |53883 |        | 58184 |	+7.98%
	| 22 MB   |      12 |   |50818 |        | 53464 |	+5.21%
	| 22 MB   |      16 |   |50463 |        | 55034 |	+9.06%
	| 22 MB   |      24 |   |46698 |        | 53702 |	+15.00%
	| 22 MB   |      32 |   |43404 |        | 54030 |	+24.48%
	| 7484 MB |       1 |   | 7974 |        |  8431 |
	| 7484 MB |       2 |   |19341 |        | 19509 |
	| 7484 MB |       4 |   |36808 |        | 38038 |
	| 7484 MB |       8 |   |47821 |        | 50377 |	+5.34%
	| 7484 MB |      12 |   |45913 |        | 49128 |	+7.00%
	| 7484 MB |      16 |   |46478 |        | 49670 |	+6.87%
	| 7484 MB |      24 |   |42793 |        | 48461 |	+13.25%
	| 7484 MB |      32 |   |36329 |        | 48436 |	+33.33%
	| 15 GB   |       1 |   | 7636 |        |  7852 |
	| 15 GB   |       2 |   |19195 |        | 19369 |
	| 15 GB   |       4 |   |35975 |        | 37323 |
	| 15 GB   |       8 |   |47919 |        | 50246 |	+4.86%
	| 15 GB   |      12 |   |45397 |        | 48608 |	+7.07%
	| 15 GB   |      16 |   |45926 |        | 49192 |	+7.11%
	| 15 GB   |      24 |   |42184 |        | 48007 |	+13.80%
	| 15 GB   |      32 |   |35983 |        | 47955 |	+33.27%

Please check the patches for more details about the schedule balance map.

The NUMA domain is supported but not yet tested.
Domain rebuild is supported but not yet tested.

Comments are very welcome.

BTW:
	Compared with 3.7.0-rc6, 3.8.0-rc3 shows a big improvement with few
	clients but suffers some regression with many clients; either way,
	this patch set helps both of them gain better performance.

Tested with:
	a 12-cpu x86 server and linux-next 3.8.0-rc3.

Michael Wang (3):
	[RFC PATCH v2 1/3] sched: schedule balance map foundation
	[RFC PATCH v2 2/3] sched: build schedule balance map
	[RFC PATCH v2 3/3] sched: simplify select_task_rq_fair() with schedule balance map

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 b/kernel/sched/core.c  |   44 ++++++++++++++++
 b/kernel/sched/fair.c  |  131 +++++++++++++++++++++++++------------------------
 b/kernel/sched/sched.h |   14 +++++
 kernel/sched/core.c    |   70 ++++++++++++++++++++++++++
 4 files changed, 197 insertions(+), 62 deletions(-)


* [RFC PATCH v2 1/3] sched: schedule balance map foundation
  2013-01-17  5:46 [RFC PATCH v2 0/3] sched: simplify the select_task_rq_fair() Michael Wang
@ 2013-01-17  5:47 ` Michael Wang
  2013-01-17  5:48 ` [RFC PATCH v2 2/3] sched: build schedule balance map Michael Wang
  2013-01-17  5:49 ` [RFC PATCH v2 3/3] sched: simplify select_task_rq_fair() with " Michael Wang
  2 siblings, 0 replies; 4+ messages in thread
From: Michael Wang @ 2013-01-17  5:47 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Mike Galbraith,
	Andrew Morton, Tejun Heo, Nikunj A. Dadhania, Namhyung Kim,
	Ram Pai

In order to get rid of the complex code in select_task_rq_fair(), an
approach to directly get the sd on each level with the proper flag is
required.

The schedule balance map is the solution: it records each sd according
to its flags and level.

For example, cpu_sbm->sd[wake][l] will locate the sd of the cpu which
supports wake-up balancing on level l.
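
As a rough illustration (not part of this patch), such a lookup is meant
to replace the per-cpu domain walk, using the SBM_WAKE_TYPE and sbm_array
names introduced below ('level' here stands for whatever level the caller
needs):

	/* old: walk the whole hierarchy to find a matching sd */
	for_each_domain(cpu, tmp) {
		if (tmp->flags & SD_BALANCE_WAKE)
			sd = tmp;
	}

	/* new: one direct lookup in the schedule balance map */
	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
	struct sched_domain *sd = sbm->sd[SBM_WAKE_TYPE][level];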

This patch contains the foundation of the schedule balance map in order
to serve the following patches.

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/core.c  |   44 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |   14 ++++++++++++++
 2 files changed, 58 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 257002c..092c801 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5575,6 +5575,9 @@ static void update_top_cache_domain(int cpu)
 	per_cpu(sd_llc_id, cpu) = id;
 }
 
+static int sbm_max_level;
+DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_balance_map, sbm_array);
+
 /*
  * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
  * hold the hotplug lock.
@@ -6037,6 +6040,46 @@ static struct sched_domain_topology_level default_topology[] = {
 
 static struct sched_domain_topology_level *sched_domain_topology = default_topology;
 
+static void sched_init_sbm(void)
+{
+	size_t size;
+	int cpu, type, node;
+	struct sched_balance_map *sbm;
+	struct sched_domain_topology_level *tl;
+
+	/*
+	 * Inelegant method, any good idea?
+	 */
+	for (tl = sched_domain_topology; tl->init; tl++, sbm_max_level++)
+		;
+
+	for_each_possible_cpu(cpu) {
+		sbm = &per_cpu(sbm_array, cpu);
+		node = cpu_to_node(cpu);
+		size = sizeof(struct sched_domain *) * sbm_max_level;
+
+		for (type = 0; type < SBM_MAX_TYPE; type++) {
+			sbm->sd[type] = kmalloc_node(size, GFP_KERNEL, node);
+			WARN_ON(!sbm->sd[type]);
+			if (!sbm->sd[type])
+				goto failed;
+		}
+	}
+
+	return;
+
+failed:
+	for_each_possible_cpu(cpu) {
+		sbm = &per_cpu(sbm_array, cpu);
+
+		for (type = 0; type < SBM_MAX_TYPE; type++)
+			kfree(sbm->sd[type]);
+	}
+
+	/* prevent further work */
+	sbm_max_level = 0;
+}
+
 #ifdef CONFIG_NUMA
 
 static int sched_domains_numa_levels;
@@ -6765,6 +6808,7 @@ void __init sched_init_smp(void)
 	alloc_cpumask_var(&fallback_doms, GFP_KERNEL);
 
 	sched_init_numa();
+	sched_init_sbm();
 
 	get_online_cpus();
 	mutex_lock(&sched_domains_mutex);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fc88644..d060913 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -349,6 +349,19 @@ struct root_domain {
 
 extern struct root_domain def_root_domain;
 
+enum {
+	SBM_EXEC_TYPE,
+	SBM_FORK_TYPE,
+	SBM_WAKE_TYPE,
+	SBM_MAX_TYPE
+};
+
+struct sched_balance_map {
+	struct sched_domain **sd[SBM_MAX_TYPE];
+	int top_level[SBM_MAX_TYPE];
+	struct sched_domain *affine_map[NR_CPUS];
+};
+
 #endif /* CONFIG_SMP */
 
 /*
@@ -416,6 +429,7 @@ struct rq {
 #ifdef CONFIG_SMP
 	struct root_domain *rd;
 	struct sched_domain *sd;
+	struct sched_balance_map *sbm;
 
 	unsigned long cpu_power;
 
-- 
1.7.4.1


* [RFC PATCH v2 2/3] sched: build schedule balance map
  2013-01-17  5:46 [RFC PATCH v2 0/3] sched: simplify the select_task_rq_fair() Michael Wang
  2013-01-17  5:47 ` [RFC PATCH v2 1/3] sched: schedule balance map foundation Michael Wang
@ 2013-01-17  5:48 ` Michael Wang
  2013-01-17  5:49 ` [RFC PATCH v2 3/3] sched: simplify select_task_rq_fair() with " Michael Wang
  2 siblings, 0 replies; 4+ messages in thread
From: Michael Wang @ 2013-01-17  5:48 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Mike Galbraith,
	Andrew Morton, Tejun Heo, Nikunj A. Dadhania, Namhyung Kim,
	Ram Pai

This patch builds the schedule balance map as designed; now
cpu_sbm->sd[f][l] can directly locate the sd of a cpu on level 'l'
with flag 'f' supported.

In order to quickly locate the lower sd while changing the base cpu,
the levels with an empty sd in the map are filled with the lower sd,
as illustrated below.
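
For illustration, assume three levels (SMT, MC and NUMA) where only the
SMT and MC domains carry SD_BALANCE_FORK; the hole filling then turns

	sbm->sd[SBM_FORK_TYPE] = { SMT-sd, MC-sd, NULL }

into

	sbm->sd[SBM_FORK_TYPE] = { SMT-sd, MC-sd, MC-sd }

so that indexing any level always yields a usable (possibly lower) sd.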

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/core.c |   70 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 70 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 092c801..0c63303 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5578,6 +5578,62 @@ static void update_top_cache_domain(int cpu)
 static int sbm_max_level;
 DEFINE_PER_CPU_SHARED_ALIGNED(struct sched_balance_map, sbm_array);
 
+static void build_sched_balance_map(int cpu)
+{
+	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
+	struct sched_domain *sd = cpu_rq(cpu)->sd;
+	struct sched_domain *top_sd = NULL;
+	int i, type, level = 0;
+
+	memset(sbm->top_level, 0, sizeof((*sbm).top_level));
+	memset(sbm->affine_map, 0, sizeof((*sbm).affine_map));
+	for (type = 0; type < SBM_MAX_TYPE; type++) {
+		memset(sbm->sd[type], 0,
+			sizeof(struct sched_domain *) * sbm_max_level);
+	}
+
+	while (sd) {
+		if (!(sd->flags & SD_LOAD_BALANCE)) {
+			/* keep walking up, or this loop never terminates */
+			sd = sd->parent;
+			continue;
+		}
+
+		if (sd->flags & SD_BALANCE_EXEC) {
+			sbm->top_level[SBM_EXEC_TYPE] = sd->level;
+			sbm->sd[SBM_EXEC_TYPE][sd->level] = sd;
+		}
+
+		if (sd->flags & SD_BALANCE_FORK) {
+			sbm->top_level[SBM_FORK_TYPE] = sd->level;
+			sbm->sd[SBM_FORK_TYPE][sd->level] = sd;
+		}
+
+		if (sd->flags & SD_BALANCE_WAKE) {
+			sbm->top_level[SBM_WAKE_TYPE] = sd->level;
+			sbm->sd[SBM_WAKE_TYPE][sd->level] = sd;
+		}
+
+		if (sd->flags & SD_WAKE_AFFINE) {
+			for_each_cpu(i, sched_domain_span(sd)) {
+				if (!sbm->affine_map[i])
+					sbm->affine_map[i] = sd;
+			}
+		}
+
+		sd = sd->parent;
+	}
+
+	/*
+	 * Fill the holes so the lower level sd can be reached easily.
+	 */
+	for (type = 0; type < SBM_MAX_TYPE; type++) {
+		level = sbm->top_level[type];
+		top_sd = sbm->sd[type][level];
+		if ((++level != sbm_max_level) && top_sd) {
+			for (; level < sbm_max_level; level++)
+				sbm->sd[type][level] = top_sd;
+		}
+	}
+}
+
 /*
  * Attach the domain 'sd' to 'cpu' as its base domain. Callers must
  * hold the hotplug lock.
@@ -5587,6 +5643,9 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct sched_domain *tmp;
+	struct sched_balance_map *sbm = &per_cpu(sbm_array, cpu);
+
+	rcu_assign_pointer(rq->sbm, NULL);
 
 	/* Remove the sched domains which do not contribute to scheduling. */
 	for (tmp = sd; tmp; ) {
@@ -5619,6 +5678,17 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	destroy_sched_domains(tmp, cpu);
 
 	update_top_cache_domain(cpu);
+
+	/* disable sbm if initialization failed or sd detached. */
+	if (!sbm_max_level || !sd)
+		return;
+
+	/*
+	 * synchronize_rcu() is unnecessary here since
+	 * destroy_sched_domains() already does the work.
+	 */
+	build_sched_balance_map(cpu);
+	rcu_assign_pointer(rq->sbm, sbm);
 }
 
 /* cpus with isolated domains */
-- 
1.7.4.1


* [RFC PATCH v2 3/3] sched: simplify select_task_rq_fair() with schedule balance map
  2013-01-17  5:46 [RFC PATCH v2 0/3] sched: simplify the select_task_rq_fair() Michael Wang
  2013-01-17  5:47 ` [RFC PATCH v2 1/3] sched: schedule balance map foundation Michael Wang
  2013-01-17  5:48 ` [RFC PATCH v2 2/3] sched: build schedule balance map Michael Wang
@ 2013-01-17  5:49 ` Michael Wang
  2 siblings, 0 replies; 4+ messages in thread
From: Michael Wang @ 2013-01-17  5:49 UTC (permalink / raw)
  To: LKML
  Cc: Ingo Molnar, Peter Zijlstra, Paul Turner, Mike Galbraith,
	Andrew Morton, Tejun Heo, Nikunj A. Dadhania, Namhyung Kim,
	Ram Pai

Since the schedule balance map provides a way to get the proper sd
directly, simplifying the code of select_task_rq_fair() is possible.

The new code is designed to preserve most of the old logic, but gets rid
of those 'for' loops by using the schedule balance map to locate the
proper sd directly; a condensed sketch follows.
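
For reference, the core of the new balance path looks roughly like this
(a condensed excerpt of the diff below, with the error handling omitted):

	sbm = cpu_rq(cpu)->sbm;
	sd = sbm->sd[type][sbm->top_level[type]];

	while (sd) {
		/* ... find_idlest_group() / find_idlest_cpu() ... */
		if (!sd->level)
			break;

		/* descend one level, re-reading the map of the chosen cpu */
		sbm = cpu_rq(cpu)->sbm;
		sd = sbm->sd[type][sd->level - 1];
	}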

Signed-off-by: Michael Wang <wangyun@linux.vnet.ibm.com>
---
 kernel/sched/fair.c |  131 +++++++++++++++++++++++++++------------------------
 1 files changed, 69 insertions(+), 62 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5eea870..d600708 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3302,100 +3302,107 @@ done:
 }
 
 /*
- * sched_balance_self: balance the current task (running on cpu) in domains
- * that have the 'flag' flag set. In practice, this is SD_BALANCE_FORK and
- * SD_BALANCE_EXEC.
+ * select_task_rq_fair()
+ *		select a proper cpu for the task to run on.
  *
- * Balance, ie. select the least loaded group.
- *
- * Returns the target CPU number, or the same CPU if no balancing is needed.
- *
- * preempt must be disabled.
+ *	p		-- the task we are going to select a cpu for
+ *	sd_flag		-- indicates the context: WAKE, EXEC or FORK
+ *	wake_flags	-- we only care about WF_SYNC currently
  */
 static int
 select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 {
-	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
+	struct sched_domain *sd = NULL;
 	int cpu = smp_processor_id();
 	int prev_cpu = task_cpu(p);
 	int new_cpu = cpu;
-	int want_affine = 0;
 	int sync = wake_flags & WF_SYNC;
+	struct sched_balance_map *sbm = NULL;
+	int type = 0;
 
 	if (p->nr_cpus_allowed == 1)
 		return prev_cpu;
 
-	if (sd_flag & SD_BALANCE_WAKE) {
-		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
-			want_affine = 1;
-		new_cpu = prev_cpu;
-	}
+	if (sd_flag & SD_BALANCE_EXEC)
+		type = SBM_EXEC_TYPE;
+	else if (sd_flag & SD_BALANCE_FORK)
+		type = SBM_FORK_TYPE;
+	else if (sd_flag & SD_BALANCE_WAKE)
+		type = SBM_WAKE_TYPE;
 
 	rcu_read_lock();
-	for_each_domain(cpu, tmp) {
-		if (!(tmp->flags & SD_LOAD_BALANCE))
-			continue;
 
+	sbm = cpu_rq(cpu)->sbm;
+	if (!sbm)
+		goto unlock;
+
+	if (sd_flag & SD_BALANCE_WAKE) {
 		/*
-		 * If both cpu and prev_cpu are part of this domain,
-		 * cpu is a valid SD_WAKE_AFFINE target.
+		 * Tasks to be woken up are special: the memory they rely
+		 * on may already be cached on prev_cpu, and they usually
+		 * require low latency.
+		 *
+		 * So first try to locate an idle cpu that shares a cache
+		 * with prev_cpu.  This has a chance to break the load
+		 * balance; fortunately, select_idle_sibling() searches
+		 * from top to bottom, which helps to reduce that chance
+		 * in some cases.
 		 */
-		if (want_affine && (tmp->flags & SD_WAKE_AFFINE) &&
-		    cpumask_test_cpu(prev_cpu, sched_domain_span(tmp))) {
-			affine_sd = tmp;
-			break;
-		}
+		new_cpu = select_idle_sibling(p, prev_cpu);
+		if (idle_cpu(new_cpu))
+			goto unlock;
 
-		if (tmp->flags & sd_flag)
-			sd = tmp;
-	}
+		/*
+		 * No idle cpu could be found in the topology of prev_cpu;
+		 * before jumping into the slow balance_path, try searching
+		 * again in the topology of the current cpu if it is an
+		 * affine target of prev_cpu.
+		 */
+		if (!sbm->affine_map[prev_cpu] ||
+				!cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
+			goto balance_path;
 
-	if (affine_sd) {
-		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
-			prev_cpu = cpu;
+		new_cpu = select_idle_sibling(p, cpu);
+		if (!idle_cpu(new_cpu))
+			goto balance_path;
 
-		new_cpu = select_idle_sibling(p, prev_cpu);
-		goto unlock;
+		/*
+		 * Invoke wake_affine() last, since it is without doubt a
+		 * performance killer.
+		 */
+		if (wake_affine(sbm->affine_map[prev_cpu], p, sync))
+			goto unlock;
 	}
 
+balance_path:
+	new_cpu = (sd_flag & SD_BALANCE_WAKE) ? prev_cpu : cpu;
+	sd = sbm->sd[type][sbm->top_level[type]];
+
 	while (sd) {
 		int load_idx = sd->forkexec_idx;
-		struct sched_group *group;
-		int weight;
-
-		if (!(sd->flags & sd_flag)) {
-			sd = sd->child;
-			continue;
-		}
+		struct sched_group *sg = NULL;
 
 		if (sd_flag & SD_BALANCE_WAKE)
 			load_idx = sd->wake_idx;
 
-		group = find_idlest_group(sd, p, cpu, load_idx);
-		if (!group) {
-			sd = sd->child;
-			continue;
-		}
+		sg = find_idlest_group(sd, p, cpu, load_idx);
+		if (!sg)
+			goto next_sd;
 
-		new_cpu = find_idlest_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
-			/* Now try balancing at a lower domain level of cpu */
-			sd = sd->child;
-			continue;
-		}
+		new_cpu = find_idlest_cpu(sg, p, cpu);
+		if (new_cpu != -1)
+			cpu = new_cpu;
+next_sd:
+		if (!sd->level)
+			break;
 
-		/* Now try balancing at a lower domain level of new_cpu */
-		cpu = new_cpu;
-		weight = sd->span_weight;
-		sd = NULL;
-		for_each_domain(cpu, tmp) {
-			if (weight <= tmp->span_weight)
-				break;
-			if (tmp->flags & sd_flag)
-				sd = tmp;
-		}
-		/* while loop will break here if sd == NULL */
+		sbm = cpu_rq(cpu)->sbm;
+		if (!sbm)
+			break;
+
+		sd = sbm->sd[type][sd->level - 1];
 	}
+
 unlock:
 	rcu_read_unlock();
 
-- 
1.7.4.1

