public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* RE: Maximum frequency of re-scheduling (minimum time quantum) question
@ 2004-07-05 14:18 Povolotsky, Alexander
  2004-07-05 23:26 ` Peter Williams
  0 siblings, 1 reply; 17+ messages in thread
From: Povolotsky, Alexander @ 2004-07-05 14:18 UTC (permalink / raw)
  To: 'linux-kernel@vger.kernel.org'; +Cc: 'Mike Galbraith'

Hello Mike,

Thanks for replying/answering !

>but as noted, wakeup of higher priority 
>threads can preempt current at (almost) any time, so a slice may be spread 
>over an indeterminate amount of time.  

Mike - part of my original question was: what is the minimum "measure"
(in whole ticks, or a fraction of a tick?) of that "(almost) any time"?
In other words, what is the latency between the moment the higher-priority
process (or thread) becomes available to run (assuming the "schedule()"
call is not explicitly made at that time...) and the moment the scheduler
STARTS the preemption, i.e. the start of the context switch (I am not
including the context-switch time itself in the question)? Is this time
settable (at compile time)?

>If you're looking for an interface into the scheduler that allows you to 
>twiddle slice length 

You mean at run time (vs. compile time), I assume?

> , there is none.

Thanks,
Alex(ander) Povolotsky

-----Original Message-----
From: Mike Galbraith [mailto:efault@gmx.de]
Sent: Monday, July 05, 2004 9:39 AM
To: Povolotsky, Alexander
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question


At 04:13 AM 7/5/2004 -0400, you wrote:
>Hello,
>
>In the Linux 2.6 kernel, configured with SCHED_RR, could rescheduling be
>set to be attempted (and executed when appropriate) at EVERY CLOCK TICK,
>thus allowing the "other" process/thread (if available and ready at the
>moment) with the higher (highest at that time) priority or, otherwise, with
>the same priority (the "next" process/thread in the same round-robin queue
>from which the "current" process/thread was "picked") to preempt the
>"current" process/thread ?

Well, you _could_ set the maximum timeslice to 1 ms (albeit only at compile 
time) if you so desired; that would do the rapid round robin between peer 
threads that you want.  Note, however, that this won't give you a 
predictable 1 ms of cpu, since a thread of higher priority, once awakened, 
will preempt anything of lower priority, and will repeatedly receive 
renewed slices as long as it wants cpu and has not exhausted its priority 
bonus... lower priority threads can starve.

>If EVERY CLOCK TICK is not conceptually possible (please note that I am
>not claiming that frequent rescheduling is "good", I am just asking to what
>measure it is possible ...) - then what is the minimum "rescheduling" time
>quantum (measured in clock ticks) that is settable/possible ?
>
>What is the default value (which I presume was chosen as "optimal" ?) ?

Timeslices are normally 100ms, but as noted, wakeup of higher priority 
threads can preempt current at (almost) any time, so a slice may be spread 
over an indeterminate amount of time.  Also note that SCHED_FIFO tasks 
_have_ no slice, so queue rotation only happens at sleep time for this 
class of tasks.

If you're looking for an interface into the scheduler that allows you to 
twiddle slice length, there is none.

         -Mike 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: Maximum frequency of re-scheduling (minimum time quantum) question
       [not found] <313680C9A886D511A06000204840E1CF08F42FD4@whq-msgusr-02.pit.comms.marconi.com>
@ 2004-07-05 15:33 ` Mike Galbraith
  0 siblings, 0 replies; 17+ messages in thread
From: Mike Galbraith @ 2004-07-05 15:33 UTC (permalink / raw)
  To: Povolotsky, Alexander, 'linux-kernel@vger.kernel.org'

At 10:18 AM 7/5/2004 -0400, Povolotsky, Alexander wrote:

>Mike - part of my original question was: what is the minimum "measure"
>(in whole ticks, or a fraction of a tick?) of that "(almost) any time"?
>In other words, what is the latency between the moment the higher-priority
>process (or thread) becomes available to run (assuming the "schedule()"
>call is not explicitly made at that time...) and the moment the scheduler
>STARTS the preemption, i.e. the start of the context switch (I am not
>including the context-switch time itself in the question)? Is this time
>settable (at compile time)?

Ah, you want wakeup latency numbers.  Sorry, I don't have any.  I believe 
Andrew and Davide both wrote tools for measuring in the wild; a search of 
the archives should turn up something that will give you the numbers you're 
looking for.

If I'm understanding your question, no, there is no latency guarantee.


> >If you're looking for an interface into the scheduler that allows you to
> >twiddle slice length
>
>You mean at run time (vs. compile time), I assume?

Yes.

         -Mike 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Maximum frequency of re-scheduling (minimum time quantum) question
  2004-07-05 14:18 Povolotsky, Alexander
@ 2004-07-05 23:26 ` Peter Williams
  0 siblings, 0 replies; 17+ messages in thread
From: Peter Williams @ 2004-07-05 23:26 UTC (permalink / raw)
  To: Povolotsky, Alexander
  Cc: 'linux-kernel@vger.kernel.org', 'Mike Galbraith'

Povolotsky, Alexander wrote:
> Hello Mike,
> 
> Thanks for replying/answering !
> 
> 
>>but as noted, wakeup of higher priority 
>>threads can preempt current at (almost) any time, so a slice may be spread 
>>over an indeterminate amount of time.  
> 
> 
> Mike - part of my original question was: what is the minimum "measure"
> (in whole ticks, or a fraction of a tick?) of that "(almost) any time"?
> In other words, what is the latency between the moment the higher-priority
> process (or thread) becomes available to run (assuming the "schedule()"
> call is not explicitly made at that time...) and the moment the scheduler
> STARTS the preemption, i.e. the start of the context switch (I am not
> including the context-switch time itself in the question)? Is this time
> settable (at compile time)?
> 
> 
>>If you're looking for an interface into the scheduler that allows you to 
>>twiddle slice length 
> 
> 
> You mean at run time (vs. compile time), I assume?
> 
> 
>>, there is none.
> 
> 
> Thanks,
> Alex(ander) Povolotsky
> 
> -----Original Message-----
> From: Mike Galbraith [mailto:efault@gmx.de]
> Sent: Monday, July 05, 2004 9:39 AM
> To: Povolotsky, Alexander
> Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question
> 
> 
> At 04:13 AM 7/5/2004 -0400, you wrote:
> 
>>Hello,
>>
>>In the Linux 2.6 kernel, configured with SCHED_RR, could rescheduling be
>>set to be attempted (and executed when appropriate) at EVERY CLOCK TICK,
>>thus allowing the "other" process/thread (if available and ready at the
>>moment) with the higher (highest at that time) priority or, otherwise, with
>>the same priority (the "next" process/thread in the same round-robin queue
>>from which the "current" process/thread was "picked") to preempt the
>>"current" process/thread ?
> 
> 
> Well, you _could_ set the maximum timeslice to 1 ms (albeit only at compile 
> time) if you so desired; that would do the rapid round robin between peer 
> threads that you want.  Note, however, that this won't give you a 
> predictable 1 ms of cpu, since a thread of higher priority, once awakened, 
> will preempt anything of lower priority, and will repeatedly receive 
> renewed slices as long as it wants cpu and has not exhausted its priority 
> bonus... lower priority threads can starve.
> 
> 
>>If EVERY CLOCK TICK is not conceptually possible (please note that I am
>>not claiming that frequent rescheduling is "good", I am just asking to what
>>measure it is possible ...) - then what is the minimum "rescheduling" time
>>quantum (measured in clock ticks) that is settable/possible ?
>>
>>What is the default value (which I presume was chosen as "optimal" ?) ?
> 
> 
> Timeslices are normally 100ms, but as noted, wakeup of higher priority 
> threads can preempt current at (almost) any time, so a slice may be spread 
> over an indeterminate amount of time.  Also note that SCHED_FIFO tasks 
> _have_ no slice, so queue rotation only happens at sleep time for this 
> class of tasks.
> 
> If you're looking for an interface into the scheduler that allows you to 
> twiddle slice length, there is none.

By freeing "time slice"s from their involvement in active/expired 
priority array switching etc., the various single-priority-array 
schedulers (e.g. Con Kolivas's staircase scheduler and my SPA "pb" and 
"eb" schedulers) that are under development raise the possibility of 
allowing the time slice for SCHED_RR tasks to be different from that of 
ordinary tasks, or even of setting it separately for each SCHED_RR 
task.  Whether this is desirable or not is another question.

If there is a genuine desire to experiment with such features, let me 
know and I can produce an experimental scheduler with this functionality 
for you to play with.

Peter
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: Maximum frequency of re-scheduling (minimum time quantum) question
@ 2004-07-07  7:59 Povolotsky, Alexander
  2004-07-07  8:30 ` Bernd Eckenfels
  2004-07-07  8:59 ` Elladan
  0 siblings, 2 replies; 17+ messages in thread
From: Povolotsky, Alexander @ 2004-07-07  7:59 UTC (permalink / raw)
  To: 'Mike Galbraith', 'elladan@eskimo.com'
  Cc: 'linux-kernel@vger.kernel.org'

Thanks to both of you for answering !

>The catch here is, without the preemptable kernel option, the kernel
>can't preempt itself, so if the first process was doing something in the
>kernel, there'd be a delay.  Even with the option, it can't preempt
>itself inside of a critical section, so there will still be a (shorter)
>delay.

Yes - thanks to a previous answer (not included here) - I am aware of this
configurable Linux 2.6 "preemptable kernel" option, and I was assuming it is
configured and in effect.

>In addition, the kernel can only preempt if something happens which lets
>it check its state.  Unless the low priority process makes some system
>calls,

Does the above mean that "some system calls" have an internal, built-in
schedule() call within their implementation?
Is there documentation (anywhere) listing all the system calls that
internally invoke the scheduler by calling schedule()?

>the only thing that will trigger this is the timer interrupt
>which runs at eg. 100 or 400hz typically.

So I think the above answers my original question: in the "worst case"
scenario - unless rescheduling is induced earlier by an explicit or
implicit (via certain system calls) invocation of schedule() - the attempt
to reschedule (again, of course, by calling schedule()) will be made at
least at every "clock tick" (say every 10 ms, the default value)?

Thanks,
Best Regards,
Alexander Povolotsky

-----Original Message-----
From: Mike Galbraith [mailto:efault@gmx.de]
Sent: Monday, July 05, 2004 11:33 AM
To: Povolotsky, Alexander; 'linux-kernel@vger.kernel.org'
Subject: RE: Maximum frequency of re-scheduling (minimum time quantum) question


At 10:18 AM 7/5/2004 -0400, Povolotsky, Alexander wrote:

>Mike - part of my original question was: what is the minimum "measure"
>(in whole ticks, or a fraction of a tick?) of that "(almost) any time"?
>In other words, what is the latency between the moment the higher-priority
>process (or thread) becomes available to run (assuming the "schedule()"
>call is not explicitly made at that time...) and the moment the scheduler
>STARTS the preemption, i.e. the start of the context switch (I am not
>including the context-switch time itself in the question)? Is this time
>settable (at compile time)?

Ah, you want wakeup latency numbers.  Sorry, I don't have any.  I believe 
Andrew and Davide both wrote tools for measuring in the wild; a search of 
the archives should turn up something that will give you the numbers you're 
looking for.

If I'm understanding your question, no, there is no latency guarantee.

> >If you're looking for an interface into the scheduler that allows you to
> >twiddle slice length
>
>You mean at run time (vs. compile time), I assume?

Yes.

         -Mike 
-----Original Message-----
From: Elladan [mailto:elladan@eskimo.com]
Sent: Monday, July 05, 2004 10:51 PM
To: Povolotsky, Alexander
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question

Your question here isn't about time slices, it's about preemption
latency.

A time slice is the amount of time the OS will hand to a process to run,
provided no other operation preempts it during its execution.

If a higher priority task becomes runnable, the kernel will switch
execution to the task at that moment as best it can.  E.g., an interrupt
happens which causes an event that wakes up the high priority task.
Before returning to the first process, the kernel will determine that it
should switch, and will return to the high priority process instead.

The catch here is, without the preemptable kernel option, the kernel
can't preempt itself, so if the first process was doing something in the
kernel, there'd be a delay.  Even with the option, it can't preempt
itself inside of a critical section, so there will still be a (shorter)
delay.

The kernel is not real-time, so there is no guarantee about how long the
delay might be.  The critical sections, for instance, might persist for
a (relatively) long time.

In addition, the kernel can only preempt if something happens which lets
it check its state.  Unless the low priority process makes some system
calls, the only thing that will trigger this is the timer interrupt
which runs at eg. 100 or 400hz typically.

-J
-----Original Message-----
From: Mike Galbraith [mailto:efault@gmx.de]
Sent: Monday, July 05, 2004 9:39 AM
To: Povolotsky, Alexander
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question


At 04:13 AM 7/5/2004 -0400, you wrote:
>Hello,
>
>In the Linux 2.6 kernel, configured with SCHED_RR, could rescheduling be
>set to be attempted (and executed when appropriate) at EVERY CLOCK TICK,
>thus allowing the "other" process/thread (if available and ready at the
>moment) with the higher (highest at that time) priority or, otherwise, with
>the same priority (the "next" process/thread in the same round-robin queue
>from which the "current" process/thread was "picked") to preempt the
>"current" process/thread ?

Well, you _could_ set the maximum timeslice to 1 ms (albeit only at compile 
time) if you so desired; that would do the rapid round robin between peer 
threads that you want.  Note, however, that this won't give you a 
predictable 1 ms of cpu, since a thread of higher priority, once awakened, 
will preempt anything of lower priority, and will repeatedly receive 
renewed slices as long as it wants cpu and has not exhausted its priority 
bonus... lower priority threads can starve.

>If EVERY CLOCK TICK is not conceptually possible (please note that I am
>not claiming that frequent rescheduling is "good", I am just asking to what
>measure it is possible ...) - then what is the minimum "rescheduling" time
>quantum (measured in clock ticks) that is settable/possible ?
>
>What is the default value (which I presume was chosen as "optimal" ?) ?

Timeslices are normally 100ms, but as noted, wakeup of higher priority 
threads can preempt current at (almost) any time, so a slice may be spread 
over an indeterminate amount of time.  Also note that SCHED_FIFO tasks 
_have_ no slice, so queue rotation only happens at sleep time for this 
class of tasks.

If you're looking for an interface into the scheduler that allows you to 
twiddle slice length, there is none.

         -Mike 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Maximum frequency of re-scheduling (minimum time quantum) question
  2004-07-07  7:59 Maximum frequency of re-scheduling (minimum time quantum) " Povolotsky, Alexander
@ 2004-07-07  8:30 ` Bernd Eckenfels
  2004-07-07  8:59 ` Elladan
  1 sibling, 0 replies; 17+ messages in thread
From: Bernd Eckenfels @ 2004-07-07  8:30 UTC (permalink / raw)
  To: linux-kernel

In article <313680C9A886D511A06000204840E1CF08F42FD7@whq-msgusr-02.pit.comms.marconi.com> you wrote:
> So I think the above answers my original question: in the "worst case"
> scenario - unless rescheduling is induced earlier by an explicit or
> implicit (via certain system calls) invocation of schedule() - the attempt
> to reschedule (again, of course, by calling schedule()) will be made at
> least at every "clock tick" (say every 10 ms, the default value)?

Only if interrupts are not disabled, which can happen if another interrupt
handler's bottom half takes too long.  And even then you still have the
critical sections in the kernel, or the big kernel lock, which may block the
next syscall.

Gruss
Bernd
-- 
eckes privat - http://www.eckes.org/
Project Freefire - http://www.freefire.org/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Maximum frequency of re-scheduling (minimum time quantum) question
  2004-07-07  7:59 Maximum frequency of re-scheduling (minimum time quantum) " Povolotsky, Alexander
  2004-07-07  8:30 ` Bernd Eckenfels
@ 2004-07-07  8:59 ` Elladan
  2004-07-07 10:26   ` Con Kolivas
  1 sibling, 1 reply; 17+ messages in thread
From: Elladan @ 2004-07-07  8:59 UTC (permalink / raw)
  To: Povolotsky, Alexander
  Cc: 'Mike Galbraith', 'elladan@eskimo.com',
	'linux-kernel@vger.kernel.org'

On Wed, Jul 07, 2004 at 03:59:01AM -0400, Povolotsky, Alexander wrote:
> Thanks to both of you for answering !
> 
> >The catch here is, without the preemptable kernel option, the kernel
> >can't preempt itself, so if the first process was doing something in the
> >kernel, there'd be a delay.  Even with the option, it can't preempt
> >itself inside of a critical section, so there will still be a (shorter)
> >delay.
> 
> Yes - thanks to a previous answer (not included here) - I am aware of this
> configurable Linux 2.6 "preemptable kernel" option, and I was assuming it
> is configured and in effect.

Note that the preemptable kernel gives you no guarantee of latency,
though it does reduce the average latency.  A different patch was
constructed in the 2.4 era which attempted to provide guaranteed latency
through a different approach (effectively, having all long-running
operations yield).

> >In addition, the kernel can only preempt if something happens which lets
> >it check its state.  Unless the low priority process makes some system
> >calls,
> 
> Does the above mean that "some system calls" have an internal, built-in
> schedule() call within their implementation?
> Is there documentation (anywhere) listing all the system calls that
> internally invoke the scheduler by calling schedule()?

All system calls execute schedule() before returning to user space, if
schedule has been requested.  It's done in the system call handler.  An
interrupt will also schedule if necessary.

In addition, if you have a preemptable kernel, schedule() may be
executed on demand at any time the kernel isn't in a critical section,
or upon exiting a critical section if it is.
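In 2.6-era terms the check lives in the syscall/interrupt return path;
roughly, as illustrative pseudocode (not the literal arch-specific entry
code):

```c
/* Illustrative pseudocode of the 2.6-era return-to-userspace path.
 * If a wakeup set TIF_NEED_RESCHED on the current task, schedule()
 * runs before the kernel drops back to userspace. */
void ret_to_user(void)
{
        while (test_thread_flag(TIF_NEED_RESCHED))
                schedule();     /* may switch to the newly woken task */
        /* ...restore registers, return to userspace... */
}
```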

> >the only thing that will trigger this is the timer interrupt
> >which runs at eg. 100 or 400hz typically.
> 
> So I think the above answers my original question: in the "worst case"
> scenario - unless rescheduling is induced earlier by an explicit or
> implicit (via certain system calls) invocation of schedule() - the attempt
> to reschedule (again, of course, by calling schedule()) will be made at
> least at every "clock tick" (say every 10 ms, the default value)?

The reschedule will be done whenever the kernel does it.  There is no
guaranteed worst case.  It's just based on "best effort."

If an interrupt from some device, or the timer interrupt, causes a
high-priority process to become runnable, the kernel will attempt to
schedule as soon as possible.  With a preemptable kernel, that will be
as soon as it releases all locks and can thus be safely interrupted.
But the kernel makes no guarantee that locks won't be held for long
periods of time, so the worst case latency is the longest possible
duration of an operation in the kernel.  That could be quite long (many
milliseconds) if it's walking tables, hitting worst-case hash behavior,
etc.

The thing to note about the timer is that, if you have no external
interrupt driving your wakeup, then you're waking up based on some sort
of timer.  The best resolution you can get from a timer is approximately
based on HZ.  In 2.6 kernels, the interrupt frequency is 1000hz, but
ticks are represented to userspace as if they were 100hz.  Also, you may
have an RTC available if you need high frequency interrupts.

-J


^ permalink raw reply	[flat|nested] 17+ messages in thread

* RE: Maximum frequency of re-scheduling (minimum time quantum) question
@ 2004-07-07  9:48 Povolotsky, Alexander
  2004-07-07 15:52 ` Elladan
  0 siblings, 1 reply; 17+ messages in thread
From: Povolotsky, Alexander @ 2004-07-07  9:48 UTC (permalink / raw)
  To: 'Elladan'
  Cc: 'Mike Galbraith', 'linux-kernel@vger.kernel.org'

Thanks again (and sorry to bother you yet again)!

>If an interrupt from some device, or the timer interrupt, causes a
>high-priority process to become runnable 

I guess you are referring above to what is also called "process wake-up" or
"process awakening"?
What are the specific means of effecting a "process wake-up" (say, at
interrupt time)?

>All system calls execute schedule() before returning to user space, if
>schedule has been requested.  It's done in the system call handler.  An
>interrupt will also schedule if necessary.

Does the above suggest that OS calls (with a schedule NOT being requested)
are permissible to execute from an ISR (Interrupt Service Routine)?
... - I have been told that anything that invokes the scheduler is not
callable from an ISR ...

Best Regards,
Alex Povolotsky
-----Original Message-----
From: Elladan [mailto:elladan@eskimo.com]
Sent: Wednesday, July 07, 2004 4:59 AM
To: Povolotsky, Alexander
Cc: 'Mike Galbraith'; 'elladan@eskimo.com';
'linux-kernel@vger.kernel.org'
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question


On Wed, Jul 07, 2004 at 03:59:01AM -0400, Povolotsky, Alexander wrote:
> Thanks to both of you for answering !
> 
> >The catch here is, without the preemptable kernel option, the kernel
> >can't preempt itself, so if the first process was doing something in the
> >kernel, there'd be a delay.  Even with the option, it can't preempt
> >itself inside of a critical section, so there will still be a (shorter)
> >delay.
> 
> Yes - thanks to a previous answer (not included here) - I am aware of this
> configurable Linux 2.6 "preemptable kernel" option, and I was assuming it
> is configured and in effect.

Note that the preemptable kernel gives you no guarantee of latency,
though it does reduce the average latency.  A different patch was
constructed in the 2.4 era which attempted to provide guaranteed latency
through a different approach (effectively, having all long-running
operations yield).

> >In addition, the kernel can only preempt if something happens which lets
> >it check its state.  Unless the low priority process makes some system
> >calls,
> 
> Does the above mean that "system calls" have an internal, built-in
> schedule() call within their implementation?

All system calls execute schedule() before returning to user space, if
schedule has been requested.  It's done in the system call handler.  An
interrupt will also schedule if necessary.

In addition, if you have a preemptable kernel, schedule() may be
executed on demand at any time the kernel isn't in a critical section,
or upon exiting a critical section if it is.

> >the only thing that will trigger this is the timer interrupt
> >which runs at eg. 100 or 400hz typically.
> 
> So I think the above answers my original question: in the "worst case"
> scenario - unless rescheduling is induced earlier by an explicit or
> implicit (via system calls) invocation of schedule() - the attempt to
> reschedule (again, of course, by calling schedule()) will be made at least
> at every "clock tick" (say every 10 ms, the default value)?

The reschedule will be done whenever the kernel does it.  There is no
guaranteed worst case.  It's just based on "best effort."

If an interrupt from some device, or the timer interrupt, causes a
high-priority process to become runnable, the kernel will attempt to
schedule as soon as possible.  With a preemptable kernel, that will be
as soon as it releases all locks and can thus be safely interrupted.
But the kernel makes no guarantee that locks won't be held for long
periods of time, so the worst case latency is the longest possible
duration of an operation in the kernel.  That could be quite long (many
milliseconds) if it's walking tables, hitting worst-case hash behavior,
etc.

The thing to note about the timer is that, if you have no external
interrupt driving your wakeup, then you're waking up based on some sort
of timer.  The best resolution you can get from a timer is approximately
based on HZ.  In 2.6 kernels, the interrupt frequency is 1000hz, but
ticks are represented to userspace as if they were 100hz.  Also, you may
have an RTC available if you need high frequency interrupts.

-J
-----Original Message-----
From: Mike Galbraith [mailto:efault@gmx.de]
Sent: Monday, July 05, 2004 11:33 AM
To: Povolotsky, Alexander; 'linux-kernel@vger.kernel.org'
Subject: RE: Maximum frequency of re-scheduling (minimum time quantum) question


At 10:18 AM 7/5/2004 -0400, Povolotsky, Alexander wrote:

>Mike - part of my original question was: what is the minimum "measure"
>(in whole ticks, or a fraction of a tick?) of that "(almost) any time"?
>In other words, what is the latency between the moment the higher-priority
>process (or thread) becomes available to run (assuming the "schedule()"
>call is not explicitly made at that time...) and the moment the scheduler
>STARTS the preemption, i.e. the start of the context switch (I am not
>including the context-switch time itself in the question)? Is this time
>settable (at compile time)?

Ah, you want wakeup latency numbers.  Sorry, I don't have any.  I believe 
Andrew and Davide both wrote tools for measuring in the wild; a search of 
the archives should turn up something that will give you the numbers you're 
looking for.

If I'm understanding your question, no, there is no latency guarantee.

> >If you're looking for an interface into the scheduler that allows you to
> >twiddle slice length
>
>You mean at run time (vs. compile time), I assume?

Yes.

         -Mike 
-----Original Message-----
From: Elladan [mailto:elladan@eskimo.com]
Sent: Monday, July 05, 2004 10:51 PM
To: Povolotsky, Alexander
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question


Your question here isn't about time slices, it's about preemption
latency.

A time slice is the amount of time the OS will hand to a process to run,
provided no other operation preempts it during its execution.

If a higher priority task becomes runnable, the kernel will switch
execution to the task at that moment as best it can.  E.g., an interrupt
happens which causes an event that wakes up the high priority task.
Before returning to the first process, the kernel will determine that it
should switch, and will return to the high priority process instead.

The catch here is, without the preemptable kernel option, the kernel
can't preempt itself, so if the first process was doing something in the
kernel, there'd be a delay.  Even with the option, it can't preempt
itself inside of a critical section, so there will still be a (shorter)
delay.

The kernel is not real-time, so there is no guarantee about how long the
delay might be.  The critical sections, for instance, might persist for
a (relatively) long time.

In addition, the kernel can only preempt if something happens which lets
it check its state.  Unless the low priority process makes some system
calls, the only thing that will trigger this is the timer interrupt
which runs at eg. 100 or 400hz typically.

-J
-----Original Message-----
From: Mike Galbraith [mailto:efault@gmx.de]
Sent: Monday, July 05, 2004 9:39 AM
To: Povolotsky, Alexander
Subject: Re: Maximum frequency of re-scheduling (minimum time quantum) question


At 04:13 AM 7/5/2004 -0400, you wrote:
>Hello,
>
>In the Linux 2.6 kernel, configured with SCHED_RR, could rescheduling be
>set to be attempted (and executed when appropriate) at EVERY CLOCK TICK,
>thus allowing the "other" process/thread (if available and ready at the
>moment) with the higher (highest at that time) priority or, otherwise, with
>the same priority (the "next" process/thread in the same round-robin queue
>from which the "current" process/thread was "picked") to preempt the
>"current" process/thread ?

Well, you _could_ set the maximum timeslice to 1 ms (albeit only at compile 
time) if you so desired; that would do the rapid round robin between peer 
threads that you want.  Note, however, that this won't give you a 
predictable 1 ms of cpu, since a thread of higher priority, once awakened, 
will preempt anything of lower priority, and will repeatedly receive 
renewed slices as long as it wants cpu and has not exhausted its priority 
bonus... lower priority threads can starve.

>If EVERY CLOCK TICK is not conceptually possible (please note that I am
>not claiming that frequent rescheduling is "good", I am just asking to what
>measure it is possible ...) - then what is the minimum "rescheduling" time
>quantum (measured in clock ticks) that is settable/possible ?
>
>What is the default value (which I presume was chosen as "optimal" ?) ?

Timeslices are normally 100ms, but as noted, wakeup of higher priority 
threads can preempt current at (almost) any time, so a slice may be spread 
over an indeterminate amount of time.  Also note that SCHED_FIFO tasks 
_have_ no slice, so queue rotation only happens at sleep time for this 
class of tasks.

If you're looking for an interface into the scheduler that allows you to 
twiddle slice length, there is none.

         -Mike 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: Maximum frequency of re-scheduling (minimum time quantum) question
  2004-07-07  8:59 ` Elladan
@ 2004-07-07 10:26   ` Con Kolivas
  0 siblings, 0 replies; 17+ messages in thread
From: Con Kolivas @ 2004-07-07 10:26 UTC (permalink / raw)
  To: Elladan
  Cc: Povolotsky, Alexander, 'Mike Galbraith',
	'linux-kernel@vger.kernel.org'

[-- Attachment #1: Type: text/plain, Size: 1138 bytes --]

Elladan wrote:
> On Wed, Jul 07, 2004 at 03:59:01AM -0400, Povolotsky, Alexander wrote:
> 
>>Thanks to both of you for answering !
>>
>>
>>>The catch here is, without the preemptable kernel option, the kernel
>>>can't preempt itself, so if the first process was doing something in the
>>>kernel, there'd be a delay.  Even with the option, it can't preempt
>>>itself inside of a critical section, so there will still be a (shorter)
>>>delay.
>>
>>Yes, I am aware, - thanks to the previous answer (not included here), about
>>this Linux 2.6
>>configurable "preemptable kernel" option and was assuming it is configured
>>and in effect.
> 
> 
> Note that the preemptable kernel gives you no guarantee of latency,
> though it does reduce the average latency.  A different patch was
> constructed in the 2.4 era which attempted to provide guaranteed latency
> through a different approach (effectively, having all long-running
> operations yield).

2.6 is not that different from the lowlat patches. Note that many of 
these lock-breaking points and conditional rescheduling calls were 
actually added during 2.5 development, so they are in 2.6 mainline.

Con



* Re: Maximum frequency of re-scheduling (minimum time quantum) que stio n
  2004-07-07  9:48 Maximum frequency of re-scheduling (minimum time quantum) que stio n Povolotsky, Alexander
@ 2004-07-07 15:52 ` Elladan
  0 siblings, 0 replies; 17+ messages in thread
From: Elladan @ 2004-07-07 15:52 UTC (permalink / raw)
  To: Povolotsky, Alexander
  Cc: 'Elladan', 'Mike Galbraith',
	'linux-kernel@vger.kernel.org'

On Wed, Jul 07, 2004 at 05:48:17AM -0400, Povolotsky, Alexander wrote:
> Thanks again (and sorry for annoying yet again)!
> 
> >If an interrupt from some device, or the timer interrupt, causes a
> >high-priority process to become runnable 
> 
> I guess you are referring above to what is also called "process wake-up" or
> "process awakening"?
> What are the specific means of effecting a "process wake-up" (say, at
> interrupt time)?

If the high priority process was blocked waiting for something (eg.,
data from a device) and the interrupt causes data to be available on the
object the process is blocked on, then the high priority process will be
woken up and will then become runnable again.

> >All system calls execute schedule() before returning to user space, if
> >schedule has been requested.  It's done in the system call handler.  An
> >interrupt will also schedule if necessary.
> 
> Does the above suggest that OS calls (with the schedule NOT being requested)
> are permissible to be executed from the ISR (Interrupt Service Routine)?
> ... - I have been told that anything that invokes the scheduler is not
> callable from the ISR ...

The top-half interrupt handler basically just puts the interrupt on a
list.  The bottom-half handler then runs with full process context and can
schedule.

If the current process was already executing in the kernel, then the
bottom half is delayed until the end of the current call.  Otherwise, it
happens immediately afterward.

-J


* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-08 13:01 Povolotsky, Alexander
@ 2004-07-08 23:32 ` Peter Williams
  2004-07-08 23:41   ` William Lee Irwin III
  2004-07-09  1:46   ` Nick Piggin
  0 siblings, 2 replies; 17+ messages in thread
From: Peter Williams @ 2004-07-08 23:32 UTC (permalink / raw)
  To: Povolotsky, Alexander
  Cc: 'linux-kernel@vger.kernel.org', 'Mike Galbraith',
	'akpm@osdl.org', 'rml@tech9.net',
	'Ingo Molnar', 'Con Kolivas', 'Elladan',
	'Chris Siebenmann'

Povolotsky, Alexander wrote:
> Hi Peter,
> 
> 
>>By freeing time slices from their involvement in active/expired 
>>priority array switching etc., the various single priority array 
>>schedulers (e.g. Con Kolivas's staircase scheduler and my SPA "pb" and 
>>"eb" schedulers) that are under development raise the possibility of 
>>allowing the time slice for SCHED_RR tasks to be different to that of 
>>ordinary tasks or even for it to be set separately for each SCHED_RR 
>>task.  Whether this is desirable or not is another question.
> 
> 
> IMHO (I am new to Linux) - if this functionality could be optionally
> configured at compile time or optionally invoked at run time (or a
> combination of both), why not have it? This addition enhances the choice
> of scheduling policies, which is good.
> 
> Is there a chance such functionality will make it into Linux 2.6 as a patch
> (at some later time)?

Not until the current scheduler is replaced with a single priority array 
scheduler.  However, if there's enough interest, I could add this 
functionality to the CPU scheduler evaluation patch so that people could 
experiment with it (BUT it would be at the bottom of my to do list).

> 
> By the way - what is the "mechanism" of the decision-making process (among
> Linux kernel developers) on such things?

I'll leave this question to someone more knowledgeable.

Pete
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce



* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-08 23:32 ` Peter Williams
@ 2004-07-08 23:41   ` William Lee Irwin III
  2004-07-09  1:46   ` Nick Piggin
  1 sibling, 0 replies; 17+ messages in thread
From: William Lee Irwin III @ 2004-07-08 23:41 UTC (permalink / raw)
  To: Peter Williams
  Cc: Povolotsky, Alexander, 'linux-kernel@vger.kernel.org',
	'Mike Galbraith', 'akpm@osdl.org',
	'rml@tech9.net', 'Ingo Molnar',
	'Con Kolivas', 'Elladan',
	'Chris Siebenmann'

Povolotsky, Alexander wrote:
>> Is there a chance such functionality will make it into Linux 2.6 as a
>> patch (at some later time)?

On Fri, Jul 09, 2004 at 09:32:16AM +1000, Peter Williams wrote:
> Not until the current scheduler is replaced with a single priority array 
> scheduler.  However, if there's enough interest, I could add this 
> functionality to the CPU scheduler evaluation patch so that people could 
> experiment with it (BUT it would be at the bottom of my to do list).

Well, this is in part because it makes the assumption of such a data
structure and then does if (scheduler_type == FOO) { /* FOO's thing */ }
in the midst of various manipulations of the structure instead of having
methods for higher-level scheduler operations.

As certain reputable news sources have said, I wish people would write
more ambitious scheduler patches. =)


-- wli


* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-08 23:32 ` Peter Williams
  2004-07-08 23:41   ` William Lee Irwin III
@ 2004-07-09  1:46   ` Nick Piggin
  2004-07-09  1:57     ` Andrew Morton
  2004-07-09  3:04     ` Peter Williams
  1 sibling, 2 replies; 17+ messages in thread
From: Nick Piggin @ 2004-07-09  1:46 UTC (permalink / raw)
  To: Peter Williams
  Cc: Povolotsky, Alexander, 'linux-kernel@vger.kernel.org',
	'Mike Galbraith', 'akpm@osdl.org',
	'rml@tech9.net', 'Ingo Molnar',
	'Con Kolivas', 'Elladan',
	'Chris Siebenmann'

Peter Williams wrote:
> Povolotsky, Alexander wrote:
> 
>> Hi Peter,
>>
>>
>>> By freeing time slices from their involvement in active/expired 
>>> priority array switching etc., the various single priority array 
>>> schedulers (e.g. Con Kolivas's staircase scheduler and my SPA "pb" 
>>> and "eb" schedulers) that are under development raise the possibility 
>>> of allowing the time slice for SCHED_RR tasks to be different to that 
>>> of ordinary tasks or even for it to be set separately for each 
>>> SCHED_RR task.  Whether this is desirable or not is another question.
>>
>>
>>
>> IMHO (I am new to Linux) - if this functionality could be optionally
>> configured at compile time or optionally invoked at run time (or a
>> combination of both), why not have it? This addition enhances the choice
>> of scheduling policies, which is good.
>>
>> Is there a chance such functionality will make it into Linux 2.6 as a
>> patch (at some later time)?
> 
> 
> Not until the current scheduler is replaced with a single priority array 
> scheduler.  However, if there's enough interest, I could add this 
> functionality to the CPU scheduler evaluation patch so that people could 
> experiment with it (BUT it would be at the bottom of my to do list).

You are mistaken. The current scheduler only uses a single array
for realtime tasks. Functionality would be trivial to implement
now.

> 
>>
>> By the way - what is the "mechanism" of the decision-making process
>> (among Linux kernel developers) on such things?
> 
> 
> I'll leave this question to someone more knowledgeable.
> 

I'd defer a final decision to others more knowledgeable of course
(Ingo, Andrew, Linus?), however it would be almost out of the
question to do a wholesale replacement in 2.6.

However well tested your scheduler might be, it needs several
orders of magnitude more testing ;) Maybe the best we can hope
for is compile time selectable alternatives.


* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-09  1:46   ` Nick Piggin
@ 2004-07-09  1:57     ` Andrew Morton
  2004-07-09  4:18       ` Con Kolivas
  2004-07-09  3:04     ` Peter Williams
  1 sibling, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2004-07-09  1:57 UTC (permalink / raw)
  To: Nick Piggin
  Cc: pwil3058, Alexander.Povolotsky, linux-kernel, efault, rml, mingo,
	kernel, elladan, cks

Nick Piggin <nickpiggin@yahoo.com.au> wrote:
>
> However well tested your scheduler might be, it needs several
>  orders of magnitude more testing ;) Maybe the best we can hope
>  for is compile time selectable alternatives.

At this stage in the kernel lifecycle, for something as fiddly as the CPU
scheduler we really should be 100% driven by problem reporting.

If someone can identify a particular misbehaviour in the CPU scheduler then
they should put their editor away and work to produce a solid testcase. 
Armed with that, we can then identify the source of the particular problem.

It is at this point, and no earlier, that we can decide what an appropriate
solution is.  We then balance the risk of that solution against the severity
of the problem which it solves and make a decision as to whether to proceed.

Right now, the ratio of quality bug reporting to scheduler patching is
bizarrely small.


* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-09  1:46   ` Nick Piggin
  2004-07-09  1:57     ` Andrew Morton
@ 2004-07-09  3:04     ` Peter Williams
  1 sibling, 0 replies; 17+ messages in thread
From: Peter Williams @ 2004-07-09  3:04 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Povolotsky, Alexander, 'linux-kernel@vger.kernel.org',
	'Mike Galbraith', 'akpm@osdl.org',
	'rml@tech9.net', 'Ingo Molnar',
	'Con Kolivas', 'Elladan',
	'Chris Siebenmann'

Nick Piggin wrote:
> Peter Williams wrote:
> 
>> Povolotsky, Alexander wrote:
>>
>>> Hi Peter,
>>>
>>>
>>>> By freeing time slices from their involvement in active/expired 
>>>> priority array switching etc., the various single priority array 
>>>> schedulers (e.g. Con Kolivas's staircase scheduler and my SPA "pb" 
>>>> and "eb" schedulers) that are under development raise the 
>>>> possibility of allowing the time slice for SCHED_RR tasks to be 
>>>> different to that of ordinary tasks or even for it to be set 
>>>> separately for each SCHED_RR task.  Whether this is desirable or not 
>>>> is another question.
>>>
>>>
>>>
>>>
>>> IMHO (I am new to Linux) - if this functionality could be optionally
>>> configured at compile time or optionally invoked at run time (or a
>>> combination of both), why not have it? This addition enhances the choice
>>> of scheduling policies, which is good.
>>>
>>> Is there a chance such functionality will make it into Linux 2.6 as a
>>> patch (at some later time)?
>>
>>
>>
>> Not until the current scheduler is replaced with a single priority 
>> array scheduler.  However, if there's enough interest, I could add 
>> this functionality to the CPU scheduler evaluation patch so that 
>> people could experiment with it (BUT it would be at the bottom of my 
>> to do list).
> 
> 
> You are mistaken. The current scheduler only uses a single array
> for realtime tasks. Functionality would be trivial to implement
> now.

OK.

> 
>>
>>>
>>> By the way - what is the "mechanism" of the decision-making process
>>> (among Linux kernel developers) on such things?
>>
>>
>>
>> I'll leave this question to someone more knowledgeable.
>>
> 
> I'd defer a final decision to others more knowledgeable of course
> (Ingo, Andrew, Linus?), however it would be almost out of the
> question to do a wholesale replacement in 2.6.
> 
> However well tested your scheduler might be, it needs several
> orders of magnitude more testing ;)

I agree (not to mention more experimenting to find good default values 
for some of the control parameters).  I was just passing on someone 
else's question.

> Maybe the best we can hope
> for is compile time selectable alternatives.

I'm hoping that a lot of the current configurable parameters in my 
schedulers can become constants.  For most of them, the reason that they 
are now configurable is to make trying out different values easier (i.e. 
no need to recompile and reboot).

Peter
-- 
Peter Williams                                   pwil3058@bigpond.net.au

"Learning, n. The kind of ignorance distinguishing the studious."
  -- Ambrose Bierce



* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-09  1:57     ` Andrew Morton
@ 2004-07-09  4:18       ` Con Kolivas
  2004-07-09  4:48         ` Andrew Morton
  0 siblings, 1 reply; 17+ messages in thread
From: Con Kolivas @ 2004-07-09  4:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Nick Piggin, pwil3058, Alexander.Povolotsky, linux-kernel, efault,
	rml, mingo, elladan, cks

Andrew Morton writes:

> Nick Piggin <nickpiggin@yahoo.com.au> wrote:
>>
>> However well tested your scheduler might be, it needs several
>>  orders of magnitude more testing ;) Maybe the best we can hope
>>  for is compile time selectable alternatives.
> 
> At this stage in the kernel lifecycle, for something as fiddly as the CPU
> scheduler we really should be 100% driven by problem reporting.
> 
> If someone can identify a particular misbehaviour in the CPU scheduler then
> they should put their editor away and work to produce a solid testcase. 
> Armed with that, we can then identify the source of the particular problem.
> 
> It is at this point, and no earlier, that we can decide what an appropriate
> solution is.  We then balance the risk of that solution against the severity
> of the problem which it solves and make a decision as to whether to proceed.
> 
> Right now, the ratio of quality bug reporting to scheduler patching is
> bizarrely small.

Is "for fun" not reason enough?

I'm still keeping an eye out for firm "behavioural" bug reports on 2.6 and 
would discuss or address them.

Seriously the only reason I went down the rewrite path was to address 
complaints about the complexity of the current design. It was also an 
opportunity to start implementing some requested features. I certainly have 
never suggested it should even be considered for 2.6.

Con



* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
  2004-07-09  4:18       ` Con Kolivas
@ 2004-07-09  4:48         ` Andrew Morton
  0 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2004-07-09  4:48 UTC (permalink / raw)
  To: Con Kolivas
  Cc: nickpiggin, pwil3058, Alexander.Povolotsky, linux-kernel, efault,
	rml, mingo, elladan, cks

Con Kolivas <kernel@kolivas.org> wrote:
>
> Andrew Morton writes:
> 
> > Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> >>
> >> However well tested your scheduler might be, it needs several
> >>  orders of magnitude more testing ;) Maybe the best we can hope
> >>  for is compile time selectable alternatives.
> > 
> > At this stage in the kernel lifecycle, for something as fiddly as the CPU
> > scheduler we really should be 100% driven by problem reporting.
> > 
> > If someone can identify a particular misbehaviour in the CPU scheduler then
> > they should put their editor away and work to produce a solid testcase. 
> > Armed with that, we can then identify the source of the particular problem.
> > 
> > It is at this point, and no earlier, that we can decide what an appropriate
> > solution is.  We then balance the risk of that solution against the severity
> > of the problem which it solves and make a decision as to whether to proceed.
> > 
> > Right now, the ratio of quality bug reporting to scheduler patching is
> > bizarrely small.
> 
> Is "for fun" not reason enough?

Sure.   "For 2.7" is a good reason, too.

> I'm still keeping an eye out for firm "behavioural" bug reports on 2.6 and 
> would discuss or address them.

OK, thanks.

> Seriously the only reason I went down the rewrite path was to address 
> complaints about the complexity of the current design. It was also an 
> opportunity to start implementing some requested features. I certainly have 
> never suggested it should even be considered for 2.6.

Yeah, I tend to think that the CPU scheduler is currently 90% good enough,
and does seem to have become rather opaque.

I wouldn't mind having a new "for fun" scheduler in -mm, except there's
ongoing futzing with the current one to be sorted out.


* Re: Maximum frequency of re-scheduling (minimum time quantum ) que stio n
       [not found] <320586863@toto.iv>
@ 2004-07-13  0:20 ` peterc
  0 siblings, 0 replies; 17+ messages in thread
From: peterc @ 2004-07-13  0:20 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

>>>>> "Nick" == Nick Piggin <nickpiggin@yahoo.com.au> writes:

Nick> However well tested your scheduler might be, it needs several
Nick> orders of magnitude more testing ;) Maybe the best we can hope
Nick> for is compile time selectable alternatives.  


Well, Solaris and other SVr4-based systems have run-time selectable
schedulers -- on a per-process basis.  (Of course, it only makes sense
to run schedulers that coexist nicely at the same time).  These
operating systems have the notion of a per-process scheduling class,
that is essentially some private data, and a vector of functions to be
called at particular times: when a thread is created, when something
goes to sleep or wakes up, at timeslice expiration, etc.  The
dispatcher then does queue management only, so there's a nice
separation of function.

By separating priority bands for each scheduler, you could have many
different schedulers cooperating simultaneously.  And if you don't
like the SCHED_OTHER scheduler you could replace it.  With care, this
could be done retrospectively to running processes.

We could perhaps think of doing this for the 2.7 timeframe.  I'm
certainly interested, to allow easier experimentation with Gang and
other NUMA schedulers.

--
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
The technical we do immediately,  the political takes *forever*


end of thread, other threads:[~2004-07-13  0:21 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-07-07  9:48 Maximum frequency of re-scheduling (minimum time quantum) que stio n Povolotsky, Alexander
2004-07-07 15:52 ` Elladan
     [not found] <320586863@toto.iv>
2004-07-13  0:20 ` Maximum frequency of re-scheduling (minimum time quantum ) " peterc
  -- strict thread matches above, loose matches on Subject: below --
2004-07-08 13:01 Povolotsky, Alexander
2004-07-08 23:32 ` Peter Williams
2004-07-08 23:41   ` William Lee Irwin III
2004-07-09  1:46   ` Nick Piggin
2004-07-09  1:57     ` Andrew Morton
2004-07-09  4:18       ` Con Kolivas
2004-07-09  4:48         ` Andrew Morton
2004-07-09  3:04     ` Peter Williams
2004-07-07  7:59 Maximum frequency of re-scheduling (minimum time quantum) " Povolotsky, Alexander
2004-07-07  8:30 ` Bernd Eckenfels
2004-07-07  8:59 ` Elladan
2004-07-07 10:26   ` Con Kolivas
     [not found] <313680C9A886D511A06000204840E1CF08F42FD4@whq-msgusr-02.pit .comms.marconi.com>
2004-07-05 15:33 ` Mike Galbraith
2004-07-05 14:18 Povolotsky, Alexander
2004-07-05 23:26 ` Peter Williams
