From: Eric Piel <Eric.Piel@lifl.fr>
To: paulmck@us.ibm.com
Cc: linux-kernel@vger.kernel.org, bhuey@lnxw.com, andrea@suse.de,
tglx@linutronix.de, karim@opersys.com, mingo@elte.hu,
pmarques@grupopie.com, bruce@andrew.cmu.edu,
nickpiggin@yahoo.com.au, ak@muc.de, sdietrich@mvista.com,
dwalker@mvista.com, hch@infradead.org, akpm@osdl.org
Subject: Re: Attempted summary of "RT patch acceptance" thread
Date: Fri, 10 Jun 2005 23:58:27 +0200 [thread overview]
Message-ID: <42AA0D03.2090505@lifl.fr> (raw)
In-Reply-To: <20050609022041.GG1295@us.ibm.com>
06/09/2005 04:20 AM, Paul E. McKenney wrote:
>>Concerning the QoS, we have been able to obtain hard realtime, at least
>>very firm real-time. Tests were conducted over 8 hours on IA-64 and x86
>>and gave respectively 105µs and 40µs of maximum latency. Not as good as
>>you have mentioned but mostly of the same order :-)
>
>
> Quite impressive! So, does this qualify as "ruby hard", or is it only
> "metal hard"? ;-)
Well, you have to consider that this is still full Linux running. The
best we can do is to not make it crash or hang more than the vanilla
kernel; it's still vulnerable to any bug in any driver :-/ In addition,
I highly doubt this approach can ever have an implementation where the
maximum latency is theoretically proven. The best we have are
measurements of the system running under high load for a very long time.
>
> The service measured was process scheduling, right?
Yes. On IA-64 it's measured from the hardware IRQ firing to process
scheduling (on x86 it's from kernel IRQ handling to process scheduling).
>>Concerning the "e. fault isolation", on our implementation, holding a
>>lock, mutex or semaphore will automatically migrate the task, therefore
>>it's not a problem. Of course, some parts of the kernel that cannot be
>>migrated might take a lock, namely the scheduler. For the scheduler, we
>>modified most of the data structures requiring a lock so that they can
>>be accessed locklessly (it's the hardest part of the implementation).
>
>
> Are the non-migrateable portions of the scheduler small enough that
> one could do a worst-case timing analysis? Preferably aided by some
> automation...
Well, ARTiS only modifies the schedule() function, but there are
probably too many possible interactions to really be able to prove
anything (the fact that it's an SMP system doesn't help!).
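To illustrate the lockless-access point from the quoted text above: here is one conventional way scheduler state can be updated and read without taking a lock, using C11 atomics. This is only a sketch of the general technique; the names and the single-counter layout are invented for the example, not taken from the ARTiS implementation (which predates C11 and works on more complex structures).

```c
#include <assert.h>
#include <stdatomic.h>

/* A runqueue statistic kept in a single atomic word, so an RT CPU
 * can sample it without ever blocking on a runqueue spinlock. */
typedef struct {
    _Atomic long nr_running;
} rq_stats;

void rq_enqueue(rq_stats *rq)
{
    /* Relaxed ordering suffices for a statistic: no other data
     * depends on observing this update in a particular order. */
    atomic_fetch_add_explicit(&rq->nr_running, 1, memory_order_relaxed);
}

void rq_dequeue(rq_stats *rq)
{
    atomic_fetch_sub_explicit(&rq->nr_running, 1, memory_order_relaxed);
}

long rq_load(rq_stats *rq)
{
    return atomic_load_explicit(&rq->nr_running, memory_order_relaxed);
}
```

The hard part, as the text says, is that real scheduler structures are multi-word (run queues, priority arrays), where a single atomic word is not enough and per-CPU or RCU-style schemes are needed instead.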
> One approach would be to mark the migrated task so that it returns
> to the realtime CPU as soon as it completes the realtime-unsafe
> operation.
We use a different approach: keep (small) statistics on which tasks
take locks often and which ones are more "computational". Then we
preferentially migrate the tasks that don't take locks. Your suggestion
could be used at the same time, but it might not be so efficient
anymore. Additionally, in the current implementation, it's not so easy
to know when a running task can go back to an RT CPU.
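The statistics-based choice can be sketched as follows: each task carries a count of recent lock acquisitions, and when a task must be moved off the RT CPU we pick the one that locks least. The struct fields and function names are illustrative assumptions, not the actual ARTiS code.

```c
#include <assert.h>
#include <stddef.h>

/* Per-task lock statistics (hypothetical layout). */
struct task_stats {
    int id;
    unsigned locks_taken;   /* decayed count of recent lock acquisitions */
};

/* Pick the migration candidate: among n runnable tasks, return the id
 * of the one that has taken the fewest locks recently, i.e. the most
 * "computational" task, which is cheapest to run away from the RT CPU. */
int pick_migration_candidate(struct task_stats *tasks, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (tasks[i].locks_taken < tasks[best].locks_taken)
            best = i;
    return tasks[best].id;
}
```

For example, among tasks with lock counts {5, 0, 9}, the zero-lock task is migrated first. A real implementation would decay the counters over time so old behaviour is forgotten.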
> Another approach is to insert a virtualization layer (think in terms of
> a very cut-down variant of Xen) that tells the OS that there are two CPUs.
> This layer then gives realtime service to one, but not to the other.
> That way, the OS thinks that it has two CPUs, and can still do the
> migration tricks despite having only one real CPU.
Simulating an SMP system on a UP one? This sounds quite heavy, but it
might be interesting to try :-)
>
> Anyway, interesting approach!
Thanks,
Eric