From: Leo Yan <leo.yan@linaro.org>
To: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: peterz@infradead.org, mingo@redhat.com,
vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
Dietmar Eggemann <Dietmar.Eggemann@arm.com>,
yuyang.du@intel.com, mturquette@baylibre.com, rjw@rjwysocki.net,
Juri Lelli <Juri.Lelli@arm.com>,
sgurrappadi@nvidia.com, pang.xunlei@zte.com.cn,
linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Subject: Re: [RFCv5 PATCH 22/46] sched: Calculate energy consumption of sched_group
Date: Thu, 3 Sep 2015 01:19:23 +0800
Message-ID: <20150902171923.GB12510@leoy-linaro>
In-Reply-To: <1436293469-25707-23-git-send-email-morten.rasmussen@arm.com>

On Tue, Jul 07, 2015 at 07:24:05PM +0100, Morten Rasmussen wrote:
> For energy-aware load-balancing decisions it is necessary to know the
> energy consumption estimates of groups of cpus. This patch introduces a
> basic function, sched_group_energy(), which estimates the energy
> consumption of the cpus in the group and any resources shared by the
> members of the group.
>
> NOTE: The function has five levels of indentation and breaks the 80
> character limit. Refactoring is necessary.
>
> cc: Ingo Molnar <mingo@redhat.com>
> cc: Peter Zijlstra <peterz@infradead.org>
>
> Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
> ---
> kernel/sched/fair.c | 146 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 146 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 78d3081..bd0be9d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4846,6 +4846,152 @@ static inline bool energy_aware(void)
> return sched_feat(ENERGY_AWARE);
> }
>
> +/*
> + * cpu_norm_usage() returns the cpu usage relative to a specific capacity,
> + * i.e. its busy ratio, in the range [0..SCHED_LOAD_SCALE], which is useful for
> + * energy calculations. Using the scale-invariant usage returned by
> + * get_cpu_usage() and approximating scale-invariant usage by:
> + *
> + * usage ~ (curr_freq/max_freq)*1024 * capacity_orig/1024 * running_time/time
> + *
> + * the normalized usage can be found using the specific capacity.
> + *
> + * capacity = capacity_orig * curr_freq/max_freq
> + *
> + * norm_usage = running_time/time ~ usage/capacity
> + */
> +static unsigned long cpu_norm_usage(int cpu, unsigned long capacity)
> +{
> + int usage = __get_cpu_usage(cpu);
> +
> + if (usage >= capacity)
> + return SCHED_CAPACITY_SCALE;
> +
> + return (usage << SCHED_CAPACITY_SHIFT)/capacity;
> +}
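
Just to make the normalization concrete to myself, a minimal standalone
sketch of the same arithmetic with made-up numbers (usage is passed in
directly instead of calling __get_cpu_usage()):

	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

	/* Same arithmetic as cpu_norm_usage(), outside the kernel. */
	static unsigned long norm_usage(unsigned long usage, unsigned long capacity)
	{
		if (usage >= capacity)
			return SCHED_CAPACITY_SCALE;

		return (usage << SCHED_CAPACITY_SHIFT) / capacity;
	}

	int main(void)
	{
		/* Hypothetical: usage 300 at current capacity 600 -> 512 (50% busy). */
		printf("%lu\n", norm_usage(300, 600));
		/* Usage at or above the capacity saturates at SCHED_CAPACITY_SCALE. */
		printf("%lu\n", norm_usage(700, 600));
		return 0;
	}
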
> +
> +static unsigned long group_max_usage(struct sched_group *sg)
> +{
> + int i;
> + unsigned long max_usage = 0;
> +
> + for_each_cpu(i, sched_group_cpus(sg))
> + max_usage = max(max_usage, get_cpu_usage(i));
> +
> + return max_usage;
> +}
> +
> +/*
> + * group_norm_usage() returns the approximated group usage relative to its
> + * current capacity (busy ratio) in the range [0..SCHED_LOAD_SCALE] for use in
> + * energy calculations. Since task executions may or may not overlap in time in
> + * the group the true normalized usage is between max(cpu_norm_usage(i)) and
> + * sum(cpu_norm_usage(i)) when iterating over all cpus in the group, i. The
> + * latter is used as the estimate as it leads to a more pessimistic energy
> + * estimate (more busy).
> + */
> +static unsigned long group_norm_usage(struct sched_group *sg, int cap_idx)
> +{
> + int i;
> + unsigned long usage_sum = 0;
> + unsigned long capacity = sg->sge->cap_states[cap_idx].cap;
> +
> + for_each_cpu(i, sched_group_cpus(sg))
> + usage_sum += cpu_norm_usage(i, capacity);
> +
> + if (usage_sum > SCHED_CAPACITY_SCALE)
> + return SCHED_CAPACITY_SCALE;
> + return usage_sum;
> +}
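
Spelling out the pessimism with hypothetical numbers: two cpus in the
group with cpu_norm_usage() of 300 and 400 give usage_sum = 700. If the
two tasks happened to run fully overlapped in time, the true group busy
ratio would be closer to max(300, 400) = 400, so the sum over-estimates
busy time (and hence busy energy) whenever executions overlap, and it is
clamped to SCHED_CAPACITY_SCALE once it exceeds 1024.
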
> +
> +static int find_new_capacity(struct sched_group *sg,
> + struct sched_group_energy *sge)
> +{
> + int idx;
> + unsigned long util = group_max_usage(sg);
> +
> + for (idx = 0; idx < sge->nr_cap_states; idx++) {
> + if (sge->cap_states[idx].cap >= util)
> + return idx;
> + }
> +
> + return idx;
> +}
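
For example, with a hypothetical three-entry table cap_states[].cap =
{430, 740, 1024}, a group max usage of 500 selects idx = 1 (the first
state with cap >= 500), so the energy terms below use cap_states[1].cap
and cap_states[1].power.
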
> +
> +/*
> + * sched_group_energy(): Returns absolute energy consumption of cpus belonging
> + * to the sched_group including shared resources shared only by members of the
> + * group. Iterates over all cpus in the hierarchy below the sched_group starting
> + * from the bottom working its way up before going to the next cpu until all
> + * cpus are covered at all levels. The current implementation is likely to
> + * gather the same usage statistics multiple times. This can probably be done in
> + * a faster but more complex way.
> + */
> +static unsigned int sched_group_energy(struct sched_group *sg_top)
> +{
> + struct sched_domain *sd;
> + int cpu, total_energy = 0;
> + struct cpumask visit_cpus;
> + struct sched_group *sg;
> +
> + WARN_ON(!sg_top->sge);
> +
> + cpumask_copy(&visit_cpus, sched_group_cpus(sg_top));
> +
> + while (!cpumask_empty(&visit_cpus)) {
> + struct sched_group *sg_shared_cap = NULL;
> +
> + cpu = cpumask_first(&visit_cpus);
> +
> + /*
> + * Is the group utilization affected by cpus outside this
> + * sched_group?
> + */
> + sd = highest_flag_domain(cpu, SD_SHARE_CAP_STATES);
> + if (sd && sd->parent)
> + sg_shared_cap = sd->parent->groups;
If the sched_domain is already the highest level (sd->parent is NULL),
should we directly use its own group to calculate the shared capacity?
That is, something like:

	if (sd && sd->parent)
		sg_shared_cap = sd->parent->groups;
	else if (sd && !sd->parent)
		sg_shared_cap = sd->groups;

> +
> + for_each_domain(cpu, sd) {
> + sg = sd->groups;
> +
> + /* Has this sched_domain already been visited? */
> + if (sd->child && group_first_cpu(sg) != cpu)
> + break;
> +
> + do {
> + struct sched_group *sg_cap_util;
> + unsigned long group_util;
> + int sg_busy_energy, sg_idle_energy, cap_idx;
> +
> + if (sg_shared_cap && sg_shared_cap->group_weight >= sg->group_weight)
> + sg_cap_util = sg_shared_cap;
> + else
> + sg_cap_util = sg;
> +
> + cap_idx = find_new_capacity(sg_cap_util, sg->sge);
> + group_util = group_norm_usage(sg, cap_idx);
> + sg_busy_energy = (group_util * sg->sge->cap_states[cap_idx].power)
> + >> SCHED_CAPACITY_SHIFT;
> + sg_idle_energy = ((SCHED_LOAD_SCALE-group_util) * sg->sge->idle_states[0].power)
> + >> SCHED_CAPACITY_SHIFT;
> +
> + total_energy += sg_busy_energy + sg_idle_energy;
> +
> + if (!sd->child)
> + cpumask_xor(&visit_cpus, &visit_cpus, sched_group_cpus(sg));
> +
> + if (cpumask_equal(sched_group_cpus(sg), sched_group_cpus(sg_top)))
> + goto next_cpu;
> +
> + } while (sg = sg->next, sg != sd->groups);
> + }
> +next_cpu:
> + continue;
> + }
> +
> + return total_energy;
> +}
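
To check my understanding of the per-group term, a standalone sketch
with a hypothetical capacity/idle entry (not real platform numbers, and
assuming SCHED_LOAD_SCALE == SCHED_CAPACITY_SCALE == 1024):

	#include <stdio.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

	int main(void)
	{
		/* Hypothetical energy model values for the selected cap_idx. */
		unsigned long cap_power = 350;	/* cap_states[cap_idx].power */
		unsigned long idle_power = 10;	/* idle_states[0].power */
		unsigned long group_util = 700;	/* group_norm_usage() result */

		unsigned long busy = (group_util * cap_power) >> SCHED_CAPACITY_SHIFT;
		unsigned long idle = ((SCHED_CAPACITY_SCALE - group_util) * idle_power)
					>> SCHED_CAPACITY_SHIFT;

		/* (700 * 350) >> 10 = 239, (324 * 10) >> 10 = 3, total = 242. */
		printf("busy=%lu idle=%lu total=%lu\n", busy, idle, busy + idle);
		return 0;
	}

So a mostly busy group is dominated by the busy term at the selected
capacity state, and the idle term only matters for lightly utilized
groups.
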
> +
> static int wake_wide(struct task_struct *p)
> {
> int factor = this_cpu_read(sd_llc_size);
> --
> 1.9.1
>