From: Con Kolivas <kernel@kolivas.org>
To: Lee Revell <rlrevell@joe-job.com>
Cc: Eric Piel <Eric.Piel@lifl.fr>,
Ken Moffat <ken@kenmoffat.uklinux.net>,
linux-kernel@vger.kernel.org
Subject: Re: ondemand cpufreq ineffective in 2.6.12 ?
Date: Wed, 13 Jul 2005 07:26:50 +1000
Message-ID: <200507130726.52654.kernel@kolivas.org>
In-Reply-To: <1121180244.2632.55.camel@mindpipe>
On Wed, 13 Jul 2005 00:57, Lee Revell wrote:
> On Tue, 2005-07-12 at 21:52 +1000, Con Kolivas wrote:
> > > Well, it's just the default settings of the kernel that have changed.
> > > If you want the old behaviour, you can use (with your admin hat):
> > > echo 1 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice
> > > IMHO it seems quite fair: if you have a process nice'd to 10, it
> > > probably means you are not in a hurry.
> >
> > That's not necessarily true. Most people use 'nice' to keep a cpu-bound
> > task from affecting their foreground applications, _not_ because they
> > don't care how long it takes.
>
> But the scheduler should do this on its own!
That is a most unusual thing to tell the person who tuned the crap out of the
2.6 scheduler so that it would do this.
> If people are having to
> renice kernel compiles to maintain decent interactive performance (and
> yes, I have to do the same thing sometimes) the scheduler is BROKEN,
> period.
Two tasks at the same nice level should still receive the same proportion of
cpu; anything else breaks the implied fairness of nice levels. Whether you
agree with this approach or not, it is what people expect. No, cpu
distribution is never _perfectly_ split 50/50 when nice levels are the same,
but we try our best to do so while maintaining interactivity.
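The equal-share claim is easy to check by hand. A minimal sketch (not from the original mail) that starts two identical busy loops at the same nice level and compares their cpu share:

```shell
# Hypothetical demo: two identical cpu-bound loops at the same nice level
# should end up with roughly equal %CPU under the 2.6 scheduler.
nice -n 10 sh -c 'while :; do :; done' & A=$!
nice -n 10 sh -c 'while :; do :; done' & B=$!
sleep 5
# Both tasks report NI 10; on an otherwise idle machine their %CPU
# figures should be close to each other (though never perfectly 50/50).
ps -o pid,ni,%cpu -p "$A" -p "$B"
kill "$A" "$B"
```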
A more useful argument would be that you'd like two sets of nice levels: one,
perhaps called latnice, marking tasks as latency-critical while still
receiving the same share of cpu, and one called cpunice, affecting the amount
of cpu but not the latency. Some would like complete control over both nices,
while others want the scheduler to do everything for them. Either way, you
want a compile to be both latniced and cpuniced. Our current nice is both
latnice and cpunice, and when nice levels are equal the scheduler does a heck
of a lot of its own latnice based on behaviour. It's not perfect, and nothing
ever is.
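For completeness, the ondemand knob from the top of the thread can be inspected and set like this. A sketch assuming the ondemand governor is active on cpu0; the sysfs path is the one from the quoted mail:

```shell
# Check which cpufreq governor is active on cpu0 (requires CONFIG_CPU_FREQ
# and a driver for your hardware; run as root to make changes).
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Restore the old behaviour, as Eric suggests above: with ignore_nice set
# to 1, load from nice'd tasks no longer counts towards the usage that
# makes ondemand raise the cpu frequency.
echo 1 > /sys/devices/system/cpu/cpu0/cpufreq/ondemand/ignore_nice
```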
Cheers,
Con