From: Steve Rotolo <steve.rotolo@ccur.com>
To: Con Kolivas <kernel@kolivas.org>
Cc: linux-kernel@vger.kernel.org, bugsy@ccur.com
Subject: Re: SD_SHARE_CPUPOWER breaks scheduler fairness
Date: Wed, 01 Jun 2005 14:41:25 -0400
Message-ID: <1117651285.22879.73.camel@bonefish>
In-Reply-To: <200506020047.16752.kernel@kolivas.org>

On Wed, 2005-06-01 at 10:47, Con Kolivas wrote:
> I didn't miss the point, but I guess I should have made that clear too.
>
> The number of tasks seen running on that sibling is still the same even if the
> queue is forced to be idle (witnessed by top thinking the load is 1 on that
> sibling even if it also shows quite a lot of idle time). It should therefore
> not attract any more tasks to itself.
> The task that is there will be trapped based on the fact that there is only
> one task _only_ if the other sibling is indefinitely running real time tasks,
> and _if_ there are other physical cpus we can use we should try to schedule
> the trapped task away. If we have N physical cpus (and N*2 logical), and we
> are running N real time threads I don't think we should expect to run
> SCHED_NORMAL tasks as well. If we have <N real time tasks (where N > 1) then
> we should still be able to run SCHED_NORMAL tasks, I agree. I'm a little
> reluctant to tackle this at this stage with the number of SMP balancing
> things already queued for -mm, but making a sibling appear more heavily laden
> when "pegged" (nr_running + 1) should suffice.
>
Consider what happens if:
- you have 2 physical cpus, 4 logical cpus
- you have 40 running SCHED_NORMAL tasks on a well balanced system --
roughly 10 on each runqueue
- start up a spinning SCHED_FIFO task on cpu 0
Assuming that cpu 1 is the sibling of 0, cpu 1 now has 10 SCHED_NORMAL
tasks that are totally screwed -- they will never, ever, run anywhere,
period.
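The scenario can be sketched as a toy simulation (plain Python, not kernel code; the sibling map and tick loop are invented for illustration of the forced-idle effect, not the actual dependent-sleeper implementation):

```python
# Toy model: 4 logical cpus, siblings paired (0,1) and (2,3).  While
# one sibling runs a SCHED_FIFO spinner, the shared-core logic keeps
# the other sibling forced idle, so its SCHED_NORMAL queue never runs.

SIBLING = {0: 1, 1: 0, 2: 3, 3: 2}

def run_ticks(runqueues, fifo_cpus, ticks):
    """Round-robin each runqueue, counting ticks each task receives."""
    got = {t: 0 for rq in runqueues.values() for t in rq}
    for _ in range(ticks):
        for cpu, rq in runqueues.items():
            if cpu in fifo_cpus:
                continue          # the FIFO spinner monopolizes this cpu
            if SIBLING[cpu] in fifo_cpus:
                continue          # sibling pegged -> this cpu forced idle
            if rq:
                got[rq[0]] += 1   # head of queue gets the tick
                rq.append(rq.pop(0))
    return got

# 40 tasks balanced ~10 per queue; FIFO spinner pinned to cpu 0.
rqs = {c: [f"t{c}_{i}" for i in range(10)] for c in range(4)}
usage = run_ticks(rqs, fifo_cpus={0}, ticks=1000)
stranded = [t for t in usage if t.startswith("t1_") and usage[t] == 0]
print(len(stranded))   # 10 -- every task on cpu 1 starved
```

Tasks on cpus 2 and 3 make normal progress; cpu 1's ten tasks get exactly zero ticks for as long as the spinner runs.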
Now consider what happens if I start up 40 more SCHED_NORMAL tasks. The
load-balancer will kindly place 10 of them on cpu 1's runqueue so they
too can be screwed for all eternity. Nice.
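A balancer that looks only at runqueue length will do exactly this. The helper below is hypothetical (it is not the kernel's find_busiest_group() logic), but it captures the failure: cpu 1's queue length of 10 looks perfectly normal, so it receives its even share of the new tasks:

```python
# Sketch: place each new task on the currently shortest runqueue,
# counting only nr_running -- the pegged cpu looks like everyone else.

def place_tasks(queue_lens, new_tasks):
    """Assign each new task to the shortest queue by length alone."""
    lens = dict(queue_lens)
    placed = {cpu: 0 for cpu in lens}
    for _ in range(new_tasks):
        cpu = min(lens, key=lambda c: lens[c])   # shortest queue wins
        lens[cpu] += 1
        placed[cpu] += 1
    return placed

# cpu 0 carries the FIFO spinner plus 10 tasks; cpus 1-3 hold 10 each.
placed = place_tasks({0: 11, 1: 10, 2: 10, 3: 10}, 40)
print(placed[1])   # 10 -- cpu 1 collects ten more doomed tasks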
One more thing: I *think* wake_idle() tends to wake tasks to idle cpus
regardless of the idle cpu's runqueue length. This is why I say the
idle cpu becomes a magnet for even more tasks, until the balancer
straightens things out again.
I guess the bottom-line is: given N logical cpus, 1/N of all
SCHED_NORMAL tasks may get stuck on a sibling cpu with no chance to
run. All it takes is one spinning SCHED_FIFO task. Sounds like a bug.
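Putting numbers on the bottom line, using the figures from the scenario above:

```python
# With N logical cpus, one forced-idle sibling ends up holding ~1/N of
# all SCHED_NORMAL tasks once the balancer has evened out the queues.
n_logical = 4
tasks = 80                       # 40 original + 40 added later
stuck = tasks // n_logical
print(stuck)   # 20 tasks with no chance to run, from one FIFO spinner
```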
--
Steve