From: preeti@linux.vnet.ibm.com (Preeti U Murthy)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v3 5/6] sched: pack the idle load balance
Date: Tue, 23 Apr 2013 10:27:13 +0530 [thread overview]
Message-ID: <517614A9.50600@linux.vnet.ibm.com> (raw)
In-Reply-To: <5175F09E.1000304@intel.com>
Hi Alex,
I have one point below.
On 04/23/2013 07:53 AM, Alex Shi wrote:
> Thank you, Preeti and Vincent, for discussing the power aware scheduler in
> detail! I believe this open discussion will help us arrive at a more
> comprehensive solution. :)
>
>> Hi Preeti,
>>
>> I have had a look at Alex patches but i have some concerns with his patches
>> -There no notion of power domain which is quite important when we speak
>> about power saving IMHO. Packing tasks has got an interest if the idle
>> CPUs can reach a useful low power state independently from busy CPUs.
>> Architectures have different low power state capabilities which must be
>> taken into account. In addition, you can have system which have CPUs
>> with better power efficiency and this kind of system are not taken into
>> account.
>
> I agree with you on this point, and I like what you have done to add a new
> flag in the sched domain. It also makes it easy for the scheduler to pick up
> new ideas in balancing. BTW, currently my balancing tries to pack tasks per
> SMT; maybe packing tasks per cpu horsepower is more compatible with other archs?
Correct me if I am wrong, but the scheduler today does not compare the
task load to the destination cpu's power before moving the task to that
cpu. This could be during:
1. Load balancing: in move_tasks(), the task load is checked only against
the imbalance before tasks are moved; there is no check that the
destination cpu has enough cpu power to handle these tasks.
2. select_task_rq_fair(): for a forked task, the idlest cpu in the group
leader is found during the power save balance (I am focusing only on the
power save policy) and is returned as the destination cpu for the forked
task. But I feel we need to check whether that idle cpu has the cpu power
to handle the task load.
The reason I bring up this point is a use case which we might need to
handle in the power aware scheduler going ahead: big.LITTLE cpus. We would
ideally want the short running tasks on the LITTLE cpus and the long
running tasks on the big cpus.
While the power aware scheduler strives to pack tasks, it should not end
up packing long running tasks onto LITTLE cpus. Keeping short running
tasks off the big cpus is the next step of course, but we should at least
not throttle the long running tasks by scheduling them on LITTLE cpus.
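To make the suggestion concrete, here is a minimal sketch of the kind of
check I have in mind. power_fits_task() is a hypothetical helper, not an
existing kernel function; it assumes the per-entity load tracking fields
and the fair.c-internal helpers weighted_cpuload() and power_of() from the
current (3.9-era) scheduler, and since load and cpu_power are only both
scaled around 1024 for a nice-0 task on a full-power cpu, the comparison
is a rough illustration rather than a finished policy:

/*
 * Hypothetical helper: would @cpu's compute capacity (cpu_power) be
 * enough to absorb @p on top of the load it already carries?
 */
static bool power_fits_task(struct task_struct *p, int cpu)
{
	unsigned long task_load = p->se.avg.load_avg_contrib;	/* tracked load of the task */
	unsigned long cpu_load  = weighted_cpuload(cpu);	/* load already queued on @cpu */
	unsigned long cpu_power = power_of(cpu);		/* scaled capacity of @cpu */

	return cpu_load + task_load <= cpu_power;
}

Both move_tasks() (point 1) and the idlest-cpu selection in
select_task_rq_fair() (point 2) could consult such a check before settling
on a destination cpu, so that a long running task is not packed onto a
LITTLE cpu that cannot keep up with it.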
Thanks
Regards
Preeti U Murthy