public inbox for linux-kernel@vger.kernel.org
* No 100 HZ timer !
@ 2001-08-01 17:22 george anzinger
  2001-08-01 19:34 ` Chris Friesen
  0 siblings, 1 reply; 17+ messages in thread
From: george anzinger @ 2001-08-01 17:22 UTC (permalink / raw)
  To: linux-kernel@vger.kernel.org

I have just posted a patch on sourceforge:
 http://sourceforge.net/projects/high-res-timers

to the 2.4.7 kernel with both ticked and tickless options, switchable
at any time via a /proc interface.  The system is instrumented with
Andrew Morton's time pegs with a couple of enhancements so you can easily
see your clock/timer overhead (thanks Andrew).

Please take a look at this system and let me know if a tickless system
is worth further effort.

The testing I have done seems to indicate a lower overhead on a lightly
loaded system, about the same overhead with some load, and much more
overhead with a heavy load.  To me this seems like the wrong thing to
do.  We would like as nearly flat an overhead-to-load curve as we can
get, and the ticked system seems to be much better in this regard.
Still, there may be applications where this works.

comments?  RESULTS?

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 17:22 No 100 HZ timer ! george anzinger
@ 2001-08-01 19:34 ` Chris Friesen
  2001-08-01 19:49   ` Richard B. Johnson
  2001-08-01 21:20   ` george anzinger
  0 siblings, 2 replies; 17+ messages in thread
From: Chris Friesen @ 2001-08-01 19:34 UTC (permalink / raw)
  To: george anzinger, linux-kernel

george anzinger wrote:

> The testing I have done seems to indicate a lower overhead on a lightly
> loaded system, about the same overhead with some load, and much more
> overhead with a heavy load.  To me this seems like the wrong thing to

What about something that tries to get the best of both worlds?  How about a
tickless system that has a max frequency for how often it will schedule?  This
would give the tickless advantage for big iron running many lightly loaded
virtual instances, but have some kind of cap on the overhead under heavy load.

Does this sound feasible?

-- 
Chris Friesen                    | MailStop: 043/33/F10  
Nortel Networks                  | work: (613) 765-0557
3500 Carling Avenue              | fax:  (613) 765-2986
Nepean, ON K2H 8E9 Canada        | email: cfriesen@nortelnetworks.com

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 19:34 ` Chris Friesen
@ 2001-08-01 19:49   ` Richard B. Johnson
  2001-08-01 20:08     ` Mark Salisbury
  2001-08-01 20:33     ` george anzinger
  2001-08-01 21:20   ` george anzinger
  1 sibling, 2 replies; 17+ messages in thread
From: Richard B. Johnson @ 2001-08-01 19:49 UTC (permalink / raw)
  To: Chris Friesen; +Cc: george anzinger, linux-kernel


> george anzinger wrote:
> 
> > The testing I have done seems to indicate a lower overhead on a lightly
> > loaded system, about the same overhead with some load, and much more
> > overhead with a heavy load.  To me this seems like the wrong thing to
> 

Doesn't the "tick-less" system presume that somebody, somewhere, will
be sleeping sometime during the 1/HZ interval so that the scheduler
gets control?

If everybody's doing:

	for(;;)
          number_crunch();

And no I/O is pending, how does the jiffy count get bumped?

I think the "tick-less" system relies upon a side-effect of
interactive use that can't be relied upon for design criteria.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.1 on an i686 machine (799.53 BogoMips).

    I was going to compile a list of innovations that could be
    attributed to Microsoft. Once I realized that Ctrl-Alt-Del
    was handled in the BIOS, I found that there aren't any.



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 19:49   ` Richard B. Johnson
@ 2001-08-01 20:08     ` Mark Salisbury
  2001-08-01 20:33     ` george anzinger
  1 sibling, 0 replies; 17+ messages in thread
From: Mark Salisbury @ 2001-08-01 20:08 UTC (permalink / raw)
  To: root, Richard B. Johnson, Chris Friesen; +Cc: george anzinger, linux-kernel

You are misunderstanding what is meant by tickless.

Tickless means that instead of taking a clock interrupt every 10 ms whether
there are any timed events pending or not, a tickless system only generates an
interrupt when there is a pending timed event.

Timed events are such things as:

	process quantum expiration,
	POSIX timers,
	alarm(),
etc.

Jiffies, as such, don't need to be updated, as a tickless system relies on a
decrementer and a (usually 64-bit) wall clock.  A jiffy then is just the
wall-clock count (or, more likely, the wall clock >> a few bits).

The tickless system is intended to take advantage of some of the more modern
features of newer CPUs (such as useful counters and decrementers).

The fixed-interval, ticked system is a reasonable lowest-common-denominator
solution, but when hardware is capable of supporting a more flexible, modern
system, it should be an available option.
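The decrementer-plus-wall-clock scheme described above can be sketched roughly like this (hypothetical names and an assumed JIFFY_SHIFT ratio; this is not code from any actual patch):

```c
#include <stdint.h>

/* Tickless time base sketch: jiffies are derived on demand from a
 * free-running 64-bit wall-clock counter instead of being bumped by
 * a periodic interrupt.  JIFFY_SHIFT is an assumed ratio of counter
 * ticks per jiffy; real hardware would dictate it. */
#define JIFFY_SHIFT 16

static uint64_t wall_clock;             /* stands in for the CPU counter */

static uint64_t read_wall_clock(void)
{
	return wall_clock;              /* real code reads TSC/decrementer */
}

/* "a jiffy then is just the wall clock >> a few bits" */
static uint64_t jiffies_now(void)
{
	return read_wall_clock() >> JIFFY_SHIFT;
}
```

A compute-bound loop never has to be interrupted just to keep this count current; the value is exact whenever something asks for it.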



On Wed, 01 Aug 2001, Richard B. Johnson wrote:
> > george anzinger wrote:
> > 
> > > The testing I have done seems to indicate a lower overhead on a lightly
> > > loaded system, about the same overhead with some load, and much more
> > > overhead with a heavy load.  To me this seems like the wrong thing to
> > 
> 
> Doesn't the "tick-less" system presume that somebody, somewhere, will
> be sleeping sometime during the 1/HZ interval so that the scheduler
> gets control?
> 
> If everybody's doing:
> 
> 	for(;;)
>           number_crunch();
> 
> And no I/O is pending, how does the jiffy count get bumped?
> 
> I think the "tick-less" system relies upon a side-effect of
> interactive use that can't be relied upon for design criteria.
> 
> Cheers,
> Dick Johnson
> 
> Penguin : Linux version 2.4.1 on an i686 machine (799.53 BogoMips).
> 
>     I was going to compile a list of innovations that could be
>     attributed to Microsoft. Once I realized that Ctrl-Alt-Del
>     was handled in the BIOS, I found that there aren't any.
> 
> 
-- 
/*------------------------------------------------**
**   Mark Salisbury | Mercury Computer Systems    **
**   mbs@mc.com     | System OS - Kernel Team     **
**------------------------------------------------**
**  Thanks to all who sponsored me for the        **
**  Multiple Sclerosis Great Mass Getaway!!       **
**  Robert, Michele and I raised $10,000 and the  **
**  ride as a whole raised over $1,000,000.  The  **
**  ride was great, and we didn't get rained on!  **
**  Thanks again to all of you who made this      **
**  possible!!                                    **
**------------------------------------------------*/


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 19:49   ` Richard B. Johnson
  2001-08-01 20:08     ` Mark Salisbury
@ 2001-08-01 20:33     ` george anzinger
  1 sibling, 0 replies; 17+ messages in thread
From: george anzinger @ 2001-08-01 20:33 UTC (permalink / raw)
  To: root; +Cc: Chris Friesen, linux-kernel

"Richard B. Johnson" wrote:
> 
> > george anzinger wrote:
> >
> > > The testing I have done seems to indicate a lower overhead on a lightly
> > > loaded system, about the same overhead with some load, and much more
> > > overhead with a heavy load.  To me this seems like the wrong thing to
> >
> 
> Doesn't the "tick-less" system presume that somebody, somewhere, will
> be sleeping sometime during the 1/HZ interval so that the scheduler
> gets control?
> 
> If everybody's doing:
> 
>         for(;;)
>           number_crunch();
> 
> And no I/O is pending, how does the jiffy count get bumped?

Who cares if it gets bumped?  In the tickless system the jiffy counter
is a function.  Thus, if you need it, it will be current, more current
than in the ticked system, because it is calculated on the spot and does
not rely on an interrupt to "bump" it.
> 
> I think the "tick-less" system relies upon a side-effect of
> interactive use that can't be relied upon for design criteria.
> 
Look at the code.  You will find it here:
http://sourceforge.net/projects/high-res-timers

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 19:34 ` Chris Friesen
  2001-08-01 19:49   ` Richard B. Johnson
@ 2001-08-01 21:20   ` george anzinger
  2001-08-02  4:28     ` Rik van Riel
  1 sibling, 1 reply; 17+ messages in thread
From: george anzinger @ 2001-08-01 21:20 UTC (permalink / raw)
  To: Chris Friesen; +Cc: linux-kernel

Chris Friesen wrote:
> 
> george anzinger wrote:
> 
> > The testing I have done seems to indicate a lower overhead on a lightly
> > loaded system, about the same overhead with some load, and much more
> > overhead with a heavy load.  To me this seems like the wrong thing to
> 
> What about something that tries to get the best of both worlds?  How about a
> tickless system that has a max frequency for how often it will schedule?  This

How would you do this?  Larger time slices?  But _most_ context switches
are not related to end of slice.  Refuse to switch?  This just idles
the CPU.

> would give the tickless advantage for big iron running many lightly loaded
> virtual instances, but have some kind of cap on the overhead under heavy load.
> 
> Does this sound feasable?
> 
I don't think so.  The problem is that the test to see if the system
should use one or the other way of doing things would, itself, eat into
the overhead.

Note that we are talking about 0.12% overhead for a ticked system.  Is
it really worth it for what amounts to less than 0.05% (if that much)?

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-01 21:20   ` george anzinger
@ 2001-08-02  4:28     ` Rik van Riel
  2001-08-02  6:03       ` george anzinger
  0 siblings, 1 reply; 17+ messages in thread
From: Rik van Riel @ 2001-08-02  4:28 UTC (permalink / raw)
  To: george anzinger; +Cc: Chris Friesen, linux-kernel

On Wed, 1 Aug 2001, george anzinger wrote:
> Chris Friesen wrote:
> > george anzinger wrote:
> >
> > > The testing I have done seems to indicate a lower overhead on a lightly
> > > loaded system, about the same overhead with some load, and much more
> > > overhead with a heavy load.  To me this seems like the wrong thing to
> >
> > What about something that tries to get the best of both worlds?  How about a
> > tickless system that has a max frequency for how often it will schedule?  This
>
> How would you do this?  Larger time slices?  But _most_ context
> switches are not related to end of slice.  Refuse to switch?
> This just idles the cpu.

Never set the next hit of the timer to earlier than (now + MIN_INTERVAL).

This way we'll get to run the current task until the timer
hits or until the current task voluntarily gives up the CPU.

We can check for already-expired timers in schedule().
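A minimal sketch of that rate cap (hypothetical names, and an assumed MIN_INTERVAL value): clamp every one-shot programming so the timer never fires sooner than MIN_INTERVAL from now, and let schedule() sweep anything that came due in the meantime.

```c
#include <stdint.h>

#define MIN_INTERVAL 10                 /* assumed floor, in jiffies */

/* Clamp the next timer hit so the hardware never interrupts more
 * often than once per MIN_INTERVAL, however busy the timer list is. */
static uint64_t clamp_next_hit(uint64_t now, uint64_t next_event)
{
	if (next_event < now + MIN_INTERVAL)
		next_event = now + MIN_INTERVAL;
	return next_event;
}

/* schedule() can then run any timer that came due while we waited;
 * such timers fire late, which is the price of the cap. */
static int timer_due(uint64_t expiry, uint64_t now)
{
	return expiry <= now;
}
```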

regards,

Rik
--
Executive summary of a recent Microsoft press release:
   "we are concerned about the GNU General Public License (GPL)"


		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02  4:28     ` Rik van Riel
@ 2001-08-02  6:03       ` george anzinger
  2001-08-02 14:39         ` Oliver Xymoron
  0 siblings, 1 reply; 17+ messages in thread
From: george anzinger @ 2001-08-02  6:03 UTC (permalink / raw)
  To: Rik van Riel; +Cc: Chris Friesen, linux-kernel

Rik van Riel wrote:
> 
> On Wed, 1 Aug 2001, george anzinger wrote:
> > Chris Friesen wrote:
> > > george anzinger wrote:
> > >
> > > > The testing I have done seems to indicate a lower overhead on a lightly
> > > > loaded system, about the same overhead with some load, and much more
> > > > overhead with a heavy load.  To me this seems like the wrong thing to
> > >
> > > What about something that tries to get the best of both worlds?  How about a
> > > tickless system that has a max frequency for how often it will schedule?  This
> >
> > How would you do this?  Larger time slices?  But _most_ context
> > switches are not related to end of slice.  Refuse to switch?
> > This just idles the cpu.
> 
> Never set the next hit of the timer to earlier than (now + MIN_INTERVAL).
> 
> This way we'll get to run the current task until the timer
> hits or until the current task voluntarily gives up the CPU.

The overhead under load is _not_ the timer interrupt, it is the context
switch that needs to set up a "slice" timer, most of which never
expire.  During a kernel compile on an 800 MHz PIII I am seeing ~300
context switches per second (i.e. about one every 3 ms).  Clearly the
switching is being caused by tasks blocking.  With the ticked system the
"slice" timer overhead is constant.
> 
> We can check for already-expired timers in schedule().

Delaying "alarm" timers is bad form.  
Especially for someone who is working on high-res-timers :)

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02  6:03       ` george anzinger
@ 2001-08-02 14:39         ` Oliver Xymoron
  2001-08-02 16:36           ` george anzinger
  0 siblings, 1 reply; 17+ messages in thread
From: Oliver Xymoron @ 2001-08-02 14:39 UTC (permalink / raw)
  To: george anzinger; +Cc: Rik van Riel, Chris Friesen, linux-kernel

On Wed, 1 Aug 2001, george anzinger wrote:

> > Never set the next hit of the timer to earlier than (now + MIN_INTERVAL).
>
> The overhead under load is _not_ the timer interrupt, it is the context
> switch that needs to set up a "slice" timer, most of which never
> expire.  During a kernel compile on an 800MHZ PIII I am seeing ~300
> context switches per second (i.e. about every 3 ms.)   Clearly the
> switching is being caused by task blocking.  With the ticked system the
> "slice" timer overhead is constant.

Can you instead just not set up a reschedule timer if the timer at the
head of the list is less than MIN_INTERVAL?

if (slice_timer_needed)
{
	/* only arm a slice timer when no pending timer would
	 * fire within one time slice anyway */
	if (time_until(next_timer) > TASK_SLICE)
	{
		next_timer = jiffies() + TASK_SLICE;
		add_timer(TASK_SLICE);
	}
	slice_timer_needed = 0;
}

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 14:39         ` Oliver Xymoron
@ 2001-08-02 16:36           ` george anzinger
  2001-08-02 17:05             ` Oliver Xymoron
  2001-08-02 17:26             ` No 100 HZ timer ! John Alvord
  0 siblings, 2 replies; 17+ messages in thread
From: george anzinger @ 2001-08-02 16:36 UTC (permalink / raw)
  To: Oliver Xymoron; +Cc: Rik van Riel, Chris Friesen, linux-kernel

Oliver Xymoron wrote:
> 
> On Wed, 1 Aug 2001, george anzinger wrote:
> 
> > > Never set the next hit of the timer to earlier than (now + MIN_INTERVAL).
> >
> > The overhead under load is _not_ the timer interrupt, it is the context
> > switch that needs to set up a "slice" timer, most of which never
> > expire.  During a kernel compile on an 800MHZ PIII I am seeing ~300
> > context switches per second (i.e. about every 3 ms.)   Clearly the
> > switching is being caused by task blocking.  With the ticked system the
> > "slice" timer overhead is constant.
> 
> Can you instead just not set up a reschedule timer if the timer at the
> head of the list is less than MIN_INTERVAL?
> 
> if(slice_timer_needed)
> {
>         if(time_until(next_timer)>TASK_SLICE)
>         {
>                 next_timer=jiffies()+TASK_SLICE;
>                 add_timer(TASK_SLICE);
>         }
>         slice_timer_needed=0;
> }
> 
Ok, but then what?  The head timer expires.  Now what?  Since we are not
clocking the slice we don't know when it started.  It seems to me we are
just shifting the overhead to a different place and adding additional
tests and code to do it.  The add_timer() code is fast.  The timing
tests (800 MHz PIII) show the whole setup taking an average of about 1.16
microseconds.  The problem is that this happens, under kernel compile,
~300 times per second, so the numbers add up.  Note that the ticked
system timer overhead (interrupts, time keeping, timers, the works) is
about 0.12% of the available CPU.  Under heavy load this rises to about
0.24% according to my measurements.  The tickless system overhead under
the same kernel compile load is about 0.12%.  No load is about 0.012%,
but heavy load can take it to 12% or more, most of this coming from the
accounting overhead in schedule().  Is it worth it?
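For scale, the per-switch setup cost cited above can be checked with simple arithmetic (figures taken from the message; the helper name is made up):

```c
/* ~300 context switches/s, each paying ~1.16 us of slice-timer
 * setup, expressed as a percentage of one CPU-second. */
static double slice_setup_overhead_pct(double switches_per_sec,
                                       double setup_us)
{
	return switches_per_sec * setup_us / 1e6 * 100.0;
}
```

That works out to roughly 0.035% on its own, so the 12%-or-more figure under heavy load must indeed be dominated by the accounting in schedule(), not by add_timer() itself.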

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 16:36           ` george anzinger
@ 2001-08-02 17:05             ` Oliver Xymoron
  2001-08-02 17:46               ` george anzinger
  2001-08-02 17:26             ` No 100 HZ timer ! John Alvord
  1 sibling, 1 reply; 17+ messages in thread
From: Oliver Xymoron @ 2001-08-02 17:05 UTC (permalink / raw)
  To: george anzinger; +Cc: Rik van Riel, Chris Friesen, linux-kernel

On Thu, 2 Aug 2001, george anzinger wrote:

> Ok, but then what?  The head timer expires.  Now what?  Since we are not
> clocking the slice we don't know when it started.  It seems to me we are
> just shifting the overhead to a different place and adding additional
> tests and code to do it.  The add_timer() code is fast.  The timing
> tests (800 MHz PIII) show the whole setup taking an average of about 1.16
> microseconds.  The problem is that this happens, under kernel compile,
> ~300 times per second, so the numbers add up.

As you said, most of those 'time to reschedule' timers never expire - we
hit a rescheduling point first, yes? In the old system, we essentially had
one 'time to reschedule' timer pending at any given time, I'm just trying
to approximate that.

> Note that the ticked
> system timer overhead (interrupts, time keeping, timers, the works) is
> about 0.12% of the available CPU.  Under heavy load this rises to about
> 0.24% according to my measurements.  The tickless system overhead under
> the same kernel compile load is about 0.12%.  No load is about 0.012%,
> but heavy load can take it to 12% or more, most of this coming from the
> accounting overhead in schedule().  Is it worth it?

Does the higher timer granularity cause overall throughput to improve, by
any chance?

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 16:36           ` george anzinger
  2001-08-02 17:05             ` Oliver Xymoron
@ 2001-08-02 17:26             ` John Alvord
  1 sibling, 0 replies; 17+ messages in thread
From: John Alvord @ 2001-08-02 17:26 UTC (permalink / raw)
  To: linux-kernel

On Thu, 02 Aug 2001 09:36:07 -0700, george anzinger
<george@mvista.com> wrote:
 ...
>  The timing
>tests (800 MHz PIII) show the whole setup taking an average of about 1.16
>microseconds.  The problem is that this happens, under kernel compile,
>~300 times per second, so the numbers add up.  Note that the ticked
>system timer overhead (interrupts, time keeping, timers, the works) is
>about 0.12% of the available CPU.  Under heavy load this rises to about
>0.24% according to my measurements.  The tickless system overhead under
>the same kernel compile load is about 0.12%.  No load is about 0.012%,
>but heavy load can take it to 12% or more, most of this coming from the
>accounting overhead in schedule().  Is it worth it?

I thought the claimed advantage was on certain S/390 configurations
(running 100s of Linux/390 images under VM) where the cost is
multiplied by the number of images. A measurement on another platform
may be interesting but not conclusive.

john alvord

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 17:05             ` Oliver Xymoron
@ 2001-08-02 17:46               ` george anzinger
  2001-08-02 18:41                 ` Oliver Xymoron
  0 siblings, 1 reply; 17+ messages in thread
From: george anzinger @ 2001-08-02 17:46 UTC (permalink / raw)
  To: Oliver Xymoron; +Cc: Rik van Riel, Chris Friesen, linux-kernel

Oliver Xymoron wrote:
> 
> On Thu, 2 Aug 2001, george anzinger wrote:
> 
> > Ok, but then what?  The head timer expires.  Now what?  Since we are not
> > clocking the slice we don't know when it started.  It seems to me we are
> > just shifting the overhead to a different place and adding additional
> > tests and code to do it.  The add_timer() code is fast.  The timing
> > tests (800 MHz PIII) show the whole setup taking an average of about 1.16
> > microseconds.  The problem is that this happens, under kernel compile,
> > ~300 times per second, so the numbers add up.
> 
> As you said, most of those 'time to reschedule' timers never expire - we
> hit a rescheduling point first, yes? In the old system, we essentially had
> one 'time to reschedule' timer pending at any given time, I'm just trying
> to approximate that.
> 
> > Note that the ticked
> > system timer overhead (interrupts, time keeping, timers, the works) is
> > about 0.12% of the available CPU.  Under heavy load this rises to about
> > 0.24% according to my measurements.  The tickless system overhead under
> > the same kernel compile load is about 0.12%.  No load is about 0.012%,
> > but heavy load can take it to 12% or more, most of this coming from the
> > accounting overhead in schedule().  Is it worth it?
> 
> Does the higher timer granularity cause overall throughput to improve, by
> any chance?
> 
Good question.  I have not run any tests for this.  You might want to do
so.  To do these tests you would want to build the system with the
tickless timers only and with the instrumentation turned off.  I would
like to hear the results.

In the meantime, here is a best guess.  First, due to hardware
limitations, the longest time you can program the timer for is ~50 ms.
This means you are reducing the load by a factor of 5.  Now the load
(i.e. timer overhead) is ~0.12%, so it would go to ~0.025%.  This means
that you should have about 0.1% more available for throughput.  Even if
we take 10 times this to cover the cache disruptions that no longer
occur, I would guess a throughput improvement of no more than 1%.
Still, measurements are better than guesses...
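The arithmetic in this guess can be spelled out (a check using only the figures stated above: 10 ms ticks at HZ=100, ~50 ms maximum one-shot interval, ~0.12% ticked timer overhead; the helper name is made up):

```c
/* Programming the timer 1/5th as often scales the timer overhead
 * by tick_ms / max_ms. */
static double scaled_overhead_pct(double overhead_pct,
                                  double tick_ms, double max_ms)
{
	return overhead_pct * tick_ms / max_ms;
}
```

0.12% * 10/50 is about 0.024%, leaving roughly 0.1% of the CPU freed, which is consistent with the at-most-1% throughput guess even with a 10x allowance for cache effects.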

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 17:46               ` george anzinger
@ 2001-08-02 18:41                 ` Oliver Xymoron
  2001-08-02 21:18                   ` george anzinger
  0 siblings, 1 reply; 17+ messages in thread
From: Oliver Xymoron @ 2001-08-02 18:41 UTC (permalink / raw)
  To: george anzinger; +Cc: Rik van Riel, Chris Friesen, linux-kernel

On Thu, 2 Aug 2001, george anzinger wrote:

> Oliver Xymoron wrote:
> >
> > Does the higher timer granularity cause overall throughput to improve, by
> > any chance?
> >
> Good question.  I have not run any tests for this.  You might want to do
> so.  To do these tests you would want to build the system with the
> tickless timers only and with the instrumentation turned off.  I would
> like to hear the results.
>
> In the meantime, here is a best guess.  First, due to hardware
> limitations, the longest time you can program the timer for is ~50 ms.
> This means you are reducing the load by a factor of 5.  Now the load
> (i.e. timer overhead) is ~0.12%, so it would go to ~0.025%.  This means
> that you should have about 0.1% more available for throughput.  Even if
> we take 10 times this to cover the cache disruptions that no longer
> occur, I would guess a throughput improvement of no more than 1%.
> Still, measurements are better than guesses...

That's not what I'm getting at at all. Simply raising HZ is known to
improve throughput on many workloads, even with more reschedules: the
system is able to more finely adapt to changes in available disk and
memory bandwidth.

BTW, there are some arguments that tickless is worth doing even on old
PIC-only systems:

http://groups.google.com/groups?q=oliver+xymoron+timer&hl=en&group=mlist.linux.kernel&safe=off&rnum=2&selm=linux.kernel.Pine.LNX.4.30.0104111337170.32245-100000%40waste.org

And I found this while I was looking too:

http://groups.google.com/groups?q=oliver+xymoron+timer&hl=en&group=mlist.linux.kernel&safe=off&rnum=3&selm=linux.kernel.Pine.LNX.4.10.10010241534110.2957-100000%40waste.org

..but no one thought it was interesting at the time.

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 18:41                 ` Oliver Xymoron
@ 2001-08-02 21:18                   ` george anzinger
  2001-08-02 22:09                     ` Oliver Xymoron
  0 siblings, 1 reply; 17+ messages in thread
From: george anzinger @ 2001-08-02 21:18 UTC (permalink / raw)
  To: Oliver Xymoron; +Cc: Rik van Riel, Chris Friesen, linux-kernel

Oliver Xymoron wrote:
> 
> On Thu, 2 Aug 2001, george anzinger wrote:
> 
> > Oliver Xymoron wrote:
> > >
> > > Does the higher timer granularity cause overall throughput to improve, by
> > > any chance?
> > >
> > Good question.  I have not run any tests for this.  You might want to do
> > so.  To do these tests you would want to build the system with the
> > tickless timers only and with the instrumentation turned off.  I would
> > like to hear the results.
> >
> > In the meantime, here is a best guess.  First, due to hardware
> > limitations, the longest time you can program the timer for is ~50 ms.
> > This means you are reducing the load by a factor of 5.  Now the load
> > (i.e. timer overhead) is ~0.12%, so it would go to ~0.025%.  This means
> > that you should have about 0.1% more available for throughput.  Even if
> > we take 10 times this to cover the cache disruptions that no longer
> > occur, I would guess a throughput improvement of no more than 1%.
> > Still, measurements are better than guesses...
> 
> That's not what I'm getting at at all. Simply raising HZ is known to
> improve throughput on many workloads, even with more reschedules: the
> system is able to more finely adapt to changes in available disk and
> memory bandwidth.
> 
> BTW, there are some arguments that tickless is worth doing even on old
> PIC-only systems:
> 
> http://groups.google.com/groups?q=oliver+xymoron+timer&hl=en&group=mlist.linux.kernel&safe=off&rnum=2&selm=linux.kernel.Pine.LNX.4.30.0104111337170.32245-100000%40waste.org
> 
> And I found this while I was looking too:
> 
> http://groups.google.com/groups?q=oliver+xymoron+timer&hl=en&group=mlist.linux.kernel&safe=off&rnum=3&selm=linux.kernel.Pine.LNX.4.10.10010241534110.2957-100000%40waste.org
> 
> ..but no one thought it was interesting at the time.
> 
I guess I am confused.  How is it that raising HZ improves throughput?
And was that before or after the changes in the time slice routines,
which now scale with HZ and before were fixed?  (That happened somewhere
around 2.2.14 or 2.2.16 or so.)

I am writing a POSIX high-resolution-timers package and hope to have
timers with resolution at least down to 1 microsecond; however, this is
independent of ticked or tickless.

The PIT may be slow to program, but it is not that slow.  My timing shows
it to be less than 0.62 microseconds, and this includes the
microsecond-to-PIT-count conversion.  At the same time (on an 800 MHz
PIII) I see interrupt overhead (time to execute an int xx to a do-nothing
interrupt handler) of ~6.5 microseconds.  This turns out, in the tickless
case with a PIT reprogramming, to be more than half of the total average
timer interrupt time.  (I.e. the timer interrupt handler + the time-list
processing took about 6.1 microseconds.  To this we add the interrupt
overhead to get 12.6 microseconds total interrupt time.)

And yes, it is possible to do tickless without the "tsc".  The KURT
package has the code to do it, along with the note that they don't think
many such machines will ever use their code...  My experiment/patch does
not do this, as it is just an experiment to see if tickless is worth
doing at all.  What I need to see to be convinced is a realistic load
that shows a measurable improvement with the tickless system.  The
system is there (http://sourceforge.net/projects/high-res-timers)
waiting for the load.

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer !
  2001-08-02 21:18                   ` george anzinger
@ 2001-08-02 22:09                     ` Oliver Xymoron
  2001-08-02 22:47                       ` No 100 HZ timer ! & the tq_timer george anzinger
  0 siblings, 1 reply; 17+ messages in thread
From: Oliver Xymoron @ 2001-08-02 22:09 UTC (permalink / raw)
  To: george anzinger; +Cc: Rik van Riel, Chris Friesen, linux-kernel

On Thu, 2 Aug 2001, george anzinger wrote:

> I guess I am confused.  How is it that raising HZ improves throughput?
> And was that before or after the changes in the time slice routines that
> now scale with HZ and before were fixed?  (That happened somewhere
> around 2.2.14 or 2.2.16 or so.)

My guess is that processes that are woken up for whatever reason get to
run sooner, reducing latency, and thereby increasing throughput when not
compute-bound. Presumably this was with shorter time slices.

--
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.."


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: No 100 HZ timer ! & the tq_timer
  2001-08-02 22:09                     ` Oliver Xymoron
@ 2001-08-02 22:47                       ` george anzinger
  0 siblings, 0 replies; 17+ messages in thread
From: george anzinger @ 2001-08-02 22:47 UTC (permalink / raw)
  To: Oliver Xymoron; +Cc: Rik van Riel, Chris Friesen, linux-kernel

Oliver Xymoron wrote:
> 
> On Thu, 2 Aug 2001, george anzinger wrote:
> 
> > I guess I am confused.  How is it that raising HZ improves throughput?
> > And was that before or after the changes in the time slice routines that
> > now scale with HZ and before were fixed?  (That happened somewhere
> > around 2.2.14 or 2.2.16 or so.)
> 
> My guess is that processes that are woken up for whatever reason get to
> run sooner, reducing latency, and thereby increasing throughput when not
> compute-bound. Presumably this was with shorter time slices.
> 
The only timer dependency I can see on a wake-up is related to
"tq_timer".  This is a task queue that is checked on each tick; tasks
in it are then run on interrupt exit.  IMHO this whole list should go
away.  If a deferred action is actually needed, a timer should be used
to kick it off.  It should not be hooked to the tick the way it is.
Better yet, why is it needed at all?

Comments anyone?

George

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2001-08-02 22:48 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-08-01 17:22 No 100 HZ timer ! george anzinger
2001-08-01 19:34 ` Chris Friesen
2001-08-01 19:49   ` Richard B. Johnson
2001-08-01 20:08     ` Mark Salisbury
2001-08-01 20:33     ` george anzinger
2001-08-01 21:20   ` george anzinger
2001-08-02  4:28     ` Rik van Riel
2001-08-02  6:03       ` george anzinger
2001-08-02 14:39         ` Oliver Xymoron
2001-08-02 16:36           ` george anzinger
2001-08-02 17:05             ` Oliver Xymoron
2001-08-02 17:46               ` george anzinger
2001-08-02 18:41                 ` Oliver Xymoron
2001-08-02 21:18                   ` george anzinger
2001-08-02 22:09                     ` Oliver Xymoron
2001-08-02 22:47                       ` No 100 HZ timer ! & the tq_timer george anzinger
2001-08-02 17:26             ` No 100 HZ timer ! John Alvord

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox