* [PATCH V3 1/6] sched: code cleanup - sd_power_saving_flags(), sd_balance_for_*_power()
2009-03-06 6:28 [PATCH V3 0/6] sched: Extend sched_mc/smt_power_savings framework Gautham R Shenoy
@ 2009-03-06 6:28 ` Gautham R Shenoy
2009-03-06 6:29 ` [PATCH V3 2/6] sched: Record the current active power savings level Gautham R Shenoy
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Gautham R Shenoy @ 2009-03-06 6:28 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy,
Vaidyanathan Srinivasan
This is a code cleanup patch which combines sd_balance_for_mc_power(),
sd_balance_for_package_power() and sd_power_saving_flags() into a single
function.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
---
include/linux/sched.h | 65 +++++++++++++++++++++++-----------------------
include/linux/topology.h | 6 +---
2 files changed, 35 insertions(+), 36 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8981e52..73aea7b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -794,34 +794,45 @@ enum powersavings_balance_level {
extern int sched_mc_power_savings, sched_smt_power_savings;
-static inline int sd_balance_for_mc_power(void)
-{
- if (sched_smt_power_savings)
- return SD_POWERSAVINGS_BALANCE;
-
- return 0;
-}
-
-static inline int sd_balance_for_package_power(void)
-{
- if (sched_mc_power_savings | sched_smt_power_savings)
- return SD_POWERSAVINGS_BALANCE;
+enum sched_domain_level {
+ SD_LV_NONE = 0,
+ SD_LV_SIBLING, /* Represents the THREADS domain */
+ SD_LV_MC, /* Represents the CORES domain */
+ SD_LV_CPU, /* Represents the PACKAGE domain */
+ SD_LV_NODE, /* Represents the NODES domain */
+ SD_LV_ALLNODES,
+ SD_LV_MAX
+};
- return 0;
-}
-/*
- * Optimise SD flags for power savings:
- * SD_BALANCE_NEWIDLE helps agressive task consolidation and power savings.
- * Keep default SD flags if sched_{smt,mc}_power_saving=0
+/**
+ * sd_power_saving_flags - Returns the flags specific to power-aware load
+ * balancing for a given sched_domain level
+ *
+ * @level: The sched_domain level for which the power-aware load-balancing
+ * flags need to be set.
+ *
+ * This function helps in setting the flags for power-aware load balancing for
+ * a given sched_domain.
+ * - SD_POWERSAVINGS_BALANCE tells the load-balancer that power-aware
+ * load balancing is applicable at this domain.
+ *
+ * - SD_BALANCE_NEWIDLE helps aggressive task consolidation and
+ * power-savings.
+ *
+ * For more information on power aware scheduling, see the comment before
+ * find_busiest_group() in kernel/sched.c
*/
-static inline int sd_power_saving_flags(void)
+static inline int sd_power_saving_flags(enum sched_domain_level level)
{
- if (sched_mc_power_savings | sched_smt_power_savings)
- return SD_BALANCE_NEWIDLE;
+ if (level == SD_LV_MC && !sched_smt_power_savings)
+ return 0;
+ if (level == SD_LV_CPU &&
+ !(sched_mc_power_savings || sched_smt_power_savings))
+ return 0;
- return 0;
+ return SD_POWERSAVINGS_BALANCE | SD_BALANCE_NEWIDLE;
}
struct sched_group {
@@ -847,16 +858,6 @@ static inline struct cpumask *sched_group_cpus(struct sched_group *sg)
return to_cpumask(sg->cpumask);
}
-enum sched_domain_level {
- SD_LV_NONE = 0,
- SD_LV_SIBLING,
- SD_LV_MC,
- SD_LV_CPU,
- SD_LV_NODE,
- SD_LV_ALLNODES,
- SD_LV_MAX
-};
-
struct sched_domain_attr {
int relax_domain_level;
};
diff --git a/include/linux/topology.h b/include/linux/topology.h
index e632d29..8097dce 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -125,8 +125,7 @@ int arch_update_cpu_topology(void);
| SD_WAKE_AFFINE \
| SD_WAKE_BALANCE \
| SD_SHARE_PKG_RESOURCES\
- | sd_balance_for_mc_power()\
- | sd_power_saving_flags(),\
+ | sd_power_saving_flags(SD_LV_MC),\
.last_balance = jiffies, \
.balance_interval = 1, \
}
@@ -151,8 +150,7 @@ int arch_update_cpu_topology(void);
| SD_BALANCE_FORK \
| SD_WAKE_AFFINE \
| SD_WAKE_BALANCE \
- | sd_balance_for_package_power()\
- | sd_power_saving_flags(),\
+ | sd_power_saving_flags(SD_LV_CPU),\
.last_balance = jiffies, \
.balance_interval = 1, \
}
* [PATCH V3 2/6] sched: Record the current active power savings level
From: Gautham R Shenoy @ 2009-03-06 6:29 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy,
Peter Zijlstra
The current active power savings level of a system is defined as the
maximum of sched_mc_power_savings and sched_smt_power_savings.
The decisions during power-aware load balancing depend on this value.
Record this value in a read-mostly global variable instead of having to
compute it every time.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
include/linux/sched.h | 1 +
kernel/sched.c | 8 ++++++--
kernel/sched_fair.c | 2 +-
3 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 73aea7b..8563698 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -793,6 +793,7 @@ enum powersavings_balance_level {
};
extern int sched_mc_power_savings, sched_smt_power_savings;
+extern enum powersavings_balance_level active_power_savings_level;
enum sched_domain_level {
SD_LV_NONE = 0,
diff --git a/kernel/sched.c b/kernel/sched.c
index 410eec4..4629bb1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3398,7 +3398,7 @@ out_balanced:
if (this == group_leader && group_leader != group_min) {
*imbalance = min_load_per_task;
- if (sched_mc_power_savings >= POWERSAVINGS_BALANCE_WAKEUP) {
+ if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP) {
cpu_rq(this_cpu)->rd->sched_mc_preferred_wakeup_cpu =
cpumask_first(sched_group_cpus(group_leader));
}
@@ -3683,7 +3683,7 @@ redo:
!test_sd_parent(sd, SD_POWERSAVINGS_BALANCE))
return -1;
- if (sched_mc_power_savings < POWERSAVINGS_BALANCE_WAKEUP)
+ if (active_power_savings_level < POWERSAVINGS_BALANCE_WAKEUP)
return -1;
if (sd->nr_balance_failed++ < 2)
@@ -7206,6 +7206,8 @@ static void sched_domain_node_span(int node, struct cpumask *span)
#endif /* CONFIG_NUMA */
int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
+/* Records the currently active power savings level */
+enum powersavings_balance_level __read_mostly active_power_savings_level;
/*
* The cpus mask in sched_group and sched_domain hangs off the end.
@@ -8040,6 +8042,8 @@ static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
sched_smt_power_savings = level;
else
sched_mc_power_savings = level;
+ active_power_savings_level = max(sched_smt_power_savings,
+ sched_mc_power_savings);
arch_reinit_sched_domains();
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 0566f2a..a3583c6 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1054,7 +1054,7 @@ static int wake_idle(int cpu, struct task_struct *p)
chosen_wakeup_cpu =
cpu_rq(this_cpu)->rd->sched_mc_preferred_wakeup_cpu;
- if (sched_mc_power_savings >= POWERSAVINGS_BALANCE_WAKEUP &&
+ if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP &&
idle_cpu(cpu) && idle_cpu(this_cpu) &&
p->mm && !(p->flags & PF_KTHREAD) &&
cpu_isset(chosen_wakeup_cpu, p->cpus_allowed))
* [PATCH V3 3/6] sched: Add Comments at the beginning of find_busiest_group.
From: Gautham R Shenoy @ 2009-03-06 6:29 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy,
Peter Zijlstra, Ingo Molnar
Currently there are no comments pertaining to power-savings balance in
the function find_busiest_group. Add appropriate comments.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
---
kernel/sched.c | 17 +++++++++++++++++
1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 4629bb1..8648eb0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3090,6 +3090,23 @@ static int move_one_task(struct rq *this_rq, int this_cpu, struct rq *busiest,
* find_busiest_group finds and returns the busiest CPU group within the
* domain. It calculates and returns the amount of weighted load which
* should be moved to restore balance via the imbalance parameter.
+ *
+ * Power-savings-balance: Through the sysfs tunables sched_mc/smt_power_savings
+ * the user can opt for aggressive task consolidation as a means to save power.
+ * When this is activated, we would have the SD_POWERSAVINGS_BALANCE flag
+ * set for the appropriate sched_domains.
+ *
+ * Within such sched_domains, find_busiest_group would try to identify
+ * a sched_group which can be freed up and whose tasks can be migrated to
+ * a sibling group which has the capacity to accommodate the former's tasks.
+ * If such a "can-go-idle" sched_group does exist, then the sibling group
+ * which can accommodate its tasks is returned as the busiest group.
+ *
+ * Furthermore, if the user opts for more aggressive power-aware load
+ * balancing through sched_smt/mc_power_savings = 2, i.e. when
+ * active_power_savings_level is greater than or equal to
+ * POWERSAVINGS_BALANCE_WAKEUP, find_busiest_group will also nominate the
+ * preferred CPU on which tasks should henceforth be woken up, instead of
+ * bothering an idle CPU.
*/
static struct sched_group *
find_busiest_group(struct sched_domain *sd, int this_cpu,
* [PATCH V3 4/6] sched: Rename the variable sched_mc_preferred_wakeup_cpu
From: Gautham R Shenoy @ 2009-03-06 6:29 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy
sched_mc_preferred_wakeup_cpu is currently used when the user seeks
power savings through aggressive task consolidation.
This is applicable for both sched_mc_power_savings as well as
sched_smt_power_savings. So rename sched_mc_preferred_wakeup_cpu to
preferred_wakeup_cpu.
Also fix the comment for preferred_wakeup_cpu.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
---
kernel/sched.c | 12 +++++++-----
kernel/sched_fair.c | 2 +-
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 8648eb0..d57a882 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -515,11 +515,13 @@ struct root_domain {
#endif
#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
/*
- * Preferred wake up cpu nominated by sched_mc balance that will be
- * used when most cpus are idle in the system indicating overall very
- * low system utilisation. Triggered at POWERSAVINGS_BALANCE_WAKEUP(2)
+ * Preferred wake-up cpu, nominated by the load balancer: the CPU
+ * on which tasks will be woken up, when they would otherwise have
+ * been woken up on an idle CPU even on a system with low cpu
+ * utilization.
+ * This is triggered at POWERSAVINGS_BALANCE_WAKEUP(2).
*/
- unsigned int sched_mc_preferred_wakeup_cpu;
+ unsigned int preferred_wakeup_cpu;
#endif
};
@@ -3416,7 +3418,7 @@ out_balanced:
if (this == group_leader && group_leader != group_min) {
*imbalance = min_load_per_task;
if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP) {
- cpu_rq(this_cpu)->rd->sched_mc_preferred_wakeup_cpu =
+ cpu_rq(this_cpu)->rd->preferred_wakeup_cpu =
cpumask_first(sched_group_cpus(group_leader));
}
return group_min;
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index a3583c6..03b1e3c 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1052,7 +1052,7 @@ static int wake_idle(int cpu, struct task_struct *p)
this_cpu = smp_processor_id();
chosen_wakeup_cpu =
- cpu_rq(this_cpu)->rd->sched_mc_preferred_wakeup_cpu;
+ cpu_rq(this_cpu)->rd->preferred_wakeup_cpu;
if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP &&
idle_cpu(cpu) && idle_cpu(this_cpu) &&
* [PATCH V3 5/6] sched: Arbitrate the nomination of preferred_wakeup_cpu
From: Gautham R Shenoy @ 2009-03-06 6:29 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy
Currently for sched_mc/smt_power_savings = 2, we consolidate tasks
by having a preferred_wakeup_cpu which will be used for all the
further wake ups.
This preferred_wakeup_cpu is currently nominated by find_busiest_group()
while load balancing sched_domains which have the SD_POWERSAVINGS_BALANCE
flag set.
However, on systems which are multi-threaded and multi-core, we can
have multiple sched_domains in the same hierarchy with
SD_POWERSAVINGS_BALANCE flag set.
Currently we don't have any arbitration mechanism to decide at which
sched_domain in the hierarchy find_busiest_group() should nominate the
preferred_wakeup_cpu. Hence a nomination made while balancing one level
can overwrite a valid nomination made previously at another, causing the
preferred_wakeup_cpu to ping-pong and preventing us from effectively
consolidating tasks.
Fix this by means of an arbitration algorithm, wherein find_busiest_group()
nominates the preferred_wakeup_cpu while balancing a particular sched_domain
only if that sched_domain:
- is the topmost power aware sched_domain.
OR
- contains the previously nominated preferred wake up cpu in it's span.
This will help to further fine-tune the wake-up biasing logic by
identifying a partially busy core within a CPU package, instead of
potentially waking up a completely idle core.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
---
kernel/sched.c | 45 +++++++++++++++++++++++++++++++++++++++++++--
1 files changed, 43 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index d57a882..35e1651 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -522,6 +522,14 @@ struct root_domain {
* This is triggered at POWERSAVINGS_BALANCE_WAKEUP(2).
*/
unsigned int preferred_wakeup_cpu;
+
+ /*
+ * top_powersavings_sd_lvl records the level of the highest
+ * sched_domain that has the SD_POWERSAVINGS_BALANCE flag set.
+ *
+ * Used to arbitrate nomination of the preferred_wakeup_cpu.
+ */
+ enum sched_domain_level top_powersavings_sd_lvl;
#endif
};
@@ -3416,9 +3424,27 @@ out_balanced:
goto ret;
if (this == group_leader && group_leader != group_min) {
+ struct root_domain *my_rd = cpu_rq(this_cpu)->rd;
*imbalance = min_load_per_task;
- if (active_power_savings_level >= POWERSAVINGS_BALANCE_WAKEUP) {
- cpu_rq(this_cpu)->rd->preferred_wakeup_cpu =
+ /*
+ * To avoid overwriting of preferred_wakeup_cpu nominations
+ * while calling find_busiest_group() at various sched_domain
+ * levels, we define an arbitration mechanism wherein
+ * find_busiest_group() nominates a preferred_wakeup_cpu at
+ * the sched_domain sd if:
+ *
+ * - sd is the highest sched_domain in the hierarchy having the
+ * SD_POWERSAVINGS_BALANCE flag set.
+ *
+ * OR
+ *
+ * - sd contains the previously nominated preferred_wakeup_cpu
+ * in its span.
+ */
+ if (sd->level == my_rd->top_powersavings_sd_lvl ||
+ cpu_isset(my_rd->preferred_wakeup_cpu,
+ *sched_domain_span(sd))) {
+ my_rd->preferred_wakeup_cpu =
cpumask_first(sched_group_cpus(group_leader));
}
return group_min;
@@ -7541,6 +7567,8 @@ static int __build_sched_domains(const struct cpumask *cpu_map,
struct root_domain *rd;
cpumask_var_t nodemask, this_sibling_map, this_core_map, send_covered,
tmpmask;
+ struct sched_domain *sd;
+
#ifdef CONFIG_NUMA
cpumask_var_t domainspan, covered, notcovered;
struct sched_group **sched_group_nodes = NULL;
@@ -7816,6 +7844,19 @@ static int __build_sched_domains(const struct cpumask *cpu_map,
err = 0;
+ rd->preferred_wakeup_cpu = UINT_MAX;
+ rd->top_powersavings_sd_lvl = SD_LV_NONE;
+
+ if (active_power_savings_level < POWERSAVINGS_BALANCE_WAKEUP)
+ goto free_tmpmask;
+
+ /* Record the level of the highest power-aware sched_domain */
+ for_each_domain(first_cpu(*cpu_map), sd) {
+ if (!(sd->flags & SD_POWERSAVINGS_BALANCE))
+ continue;
+ rd->top_powersavings_sd_lvl = sd->level;
+ }
+
free_tmpmask:
free_cpumask_var(tmpmask);
free_send_covered:
* [PATCH V3 6/6] sched: Fix sd_parent_degenerate for SD_POWERSAVINGS_BALANCE.
From: Gautham R Shenoy @ 2009-03-06 6:29 UTC (permalink / raw)
To: Vaidyanathan Srinivasan, Balbir Singh, Peter Zijlstra,
Ingo Molnar, Suresh Siddha
Cc: Dipankar Sarma, efault, andi, linux-kernel, Gautham R Shenoy,
Vaidyanathan Srinivasan
Currently a sched_domain having a single group can be prevented from getting
degenerated if it has the SD_POWERSAVINGS_BALANCE flag set. But since it has
only one group, it won't have any scope for performing power-savings balance,
as it does not have a sibling group to pull from.
Apart from not providing any power savings, it also fails to participate
in normal load balancing.
Fix this by allowing such a sched_domain to degenerate and pass on the
responsibility of performing SD_POWERSAVINGS_BALANCE to its parent domain.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
---
kernel/sched.c | 14 ++++++++++++++
1 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 35e1651..d27b8e3 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6970,6 +6970,20 @@ sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent)
SD_SHARE_PKG_RESOURCES);
if (nr_node_ids == 1)
pflags &= ~SD_SERIALIZE;
+
+ /*
+ * If the only flag preventing us from degenerating a domain
+ * with a single group is SD_POWERSAVINGS_BALANCE, transfer that
+ * responsibility to the new parent and degenerate this domain.
+ * With a single group, it can't contribute to power-aware load
+ * balancing anyway.
+ */
+ if (pflags & SD_POWERSAVINGS_BALANCE && parent->parent) {
+ pflags &= ~SD_POWERSAVINGS_BALANCE;
+ parent->parent->flags |=
+ sd_power_saving_flags(parent->level);
+ }
}
if (~cflags & pflags)
return 0;