From: Arjan van de Ven <arjan@linux.intel.com>
To: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: "mingo@kernel.org" <mingo@kernel.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"vincent.guittot@linaro.org" <vincent.guittot@linaro.org>,
"preeti@linux.vnet.ibm.com" <preeti@linux.vnet.ibm.com>,
"alex.shi@intel.com" <alex.shi@intel.com>,
"efault@gmx.de" <efault@gmx.de>,
"pjt@google.com" <pjt@google.com>,
"len.brown@intel.com" <len.brown@intel.com>,
"corbet@lwn.net" <corbet@lwn.net>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"torvalds@linux-foundation.org" <torvalds@linux-foundation.org>,
"tglx@linutronix.de" <tglx@linutronix.de>,
Catalin Marinas <Catalin.Marinas@arm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linaro-kernel@lists.linaro.org" <linaro-kernel@lists.linaro.org>
Subject: Re: [RFC][PATCH 0/9] sched: Power scheduler design proposal
Date: Wed, 10 Jul 2013 06:05:00 -0700
Message-ID: <51DD5BFC.8000102@linux.intel.com>
In-Reply-To: <20130710111627.GC15989@e103687>
>
>>
>> also, it almost looks like there is a fundamental assumption in the code
>> that you can get the current effective P state to make scheduler decisions on;
>> on Intel at least that is basically impossible... and getting more so with every generation
>> (likewise for AMD afaics)
>>
>> (you can get what you ran at on average over some time in the past, but not
>> what you're at now or going forward)
>>
>
> As described above, it is not a strict assumption. From a scheduler
> point of view we somehow need to know if the cpus are truly fully
> utilized (at their highest P-state)
unfortunately we can't provide this on Intel ;-(
we can tell you what you ran at on average; we cannot tell you whether that was the max
(first of all, because we outright don't know what the max would have been, and second,
because we may have been running slower than max because the workload was memory bound, or
because of any of the other conditions that make the HW P-state "governor" decide to reduce
frequency for efficiency reasons)
> so we need to throw more cpus at the
> problem (assuming that we have more than one task per cpu) or if we can
> just go to a higher P-state. We don't need a strict guarantee that we
> get exactly the P-state that we request for each cpu. The power
> scheduler generates hints and the power driver gives us feedback on what
> we can roughly expect to get.
>
>> I'm rather nervous about calculating how many cores you want active as a core scheduler feature.
>> I understand that for your big.LITTLE architecture you need this due to the asymmetry,
>> but as a general rule for more symmetric systems it's known to be suboptimal by quite a
>> real percentage. For a normal Intel single CPU system it's sort of the worst case you can do
>> in that it leads to serializing tasks that could have run in parallel over multiple cores/threads.
>> So at minimum this kind of logic must be enabled/disabled based on architecture decisions.
>
> Packing clearly has to take power topology into account and do the right
> thing for the particular platform. It is not in place yet, but will be
> addressed. I believe it would make sense for dual cpu Intel systems to
> pack at socket level?
a little bit. if you have a system with 2 quad core sockets, it will make sense to pack 2 tasks
onto a single socket, assuming they are not cache or memory bandwidth bound (remember this is NUMA!)
but if you have 4 tasks, it's not likely to be worth it to pack, unless you get an enormous
economy of scale from cache sharing
(this is far more about getting NUMA balancing right than about power; you're not very likely
to win back the power you lose from inefficiency if you get the NUMA side wrong by being
too smart about power placement)