From: Ingo Molnar <mingo@elte.hu>
To: Bill Huey <billh@gnuppy.monkey.org>
Cc: Darren Hart <darren@dvhart.com>,
linux-kernel@vger.kernel.org,
Thomas Gleixner <tglx@linutronix.de>,
"Stultz, John" <johnstul@us.ibm.com>,
Peter Williams <pwil3058@bigpond.net.au>,
"Siddha, Suresh B" <suresh.b.siddha@intel.com>,
Nick Piggin <nickpiggin@yahoo.com.au>
Subject: Re: RT task scheduling
Date: Sat, 8 Apr 2006 10:03:49 +0200
Message-ID: <20060408080349.GA19195@elte.hu>
In-Reply-To: <20060408075430.GA19403@gnuppy.monkey.org>
* Bill Huey <billh@gnuppy.monkey.org> wrote:
> The last time I looked at it I thought it did something pretty
> simplistic, in that it just dumped any RT thread to another CPU but
> didn't do it in a strict manner with regard to priority. Maybe that's
> changed, or else I didn't pay attention to it as carefully as I
> thought.
well, as Darren's testcase shows, it might still have a bug - but the
mechanism is intended to be strict. (the implementation had a couple of
strictness bugs - they showed up as long latencies on SMP - but those
were ironed out months ago.)
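
(to make "strict" concrete, here's a rough sketch of the idea - this is
simplified illustrative code, not the actual -rt implementation; the
find_lowest_cpu() helper and the rq/task fields used here are stand-ins
for the real kernel internals:)

/*
 * Strict push balancing, sketched: a woken RT task that cannot run
 * on this CPU should go to the CPU whose current task has the
 * numerically highest prio value (i.e. the lowest RT priority), and
 * only if that priority is strictly lower than the pushed task's.
 */
static int find_lowest_cpu(struct task_struct *p)
{
	int cpu, lowest_cpu = -1;
	int lowest_prio = p->prio;	/* must be strictly beaten */

	for_each_online_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		if (rq->curr->prio > lowest_prio) {
			lowest_prio = rq->curr->prio;
			lowest_cpu = cpu;
		}
	}
	return lowest_cpu;	/* -1: no strictly lower-prio CPU */
}

anything less strict than that - e.g. picking the first remote CPU
regardless of what it is running - shows up directly as an RT latency.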
> As far as CPU binding goes, I'm wanting a method of getting around the
> latency of the rt overload logic in certain cases, at the expense of
> rebalancing. That's what I meant by it.
yeah, that certainly makes sense, and it's one reason why i'm thinking
about a separate SCHED_FIFO_GLOBAL policy for 'globally scheduled' RT
tasks, while still keeping the current lightweight non-global RT
scheduling. Global scheduling means either a global lock or, as in the
-rt implementation, a "global IPI" - either way there's a nontrivial
"global" cost involved.
Ingo