public inbox for linux-kernel@vger.kernel.org
* [patch] O(1) scheduler updates, -J2
@ 2002-01-18 18:18 Ingo Molnar
  2002-01-19 22:19 ` Matthew Sackman
  0 siblings, 1 reply; 6+ messages in thread
From: Ingo Molnar @ 2002-01-18 18:18 UTC (permalink / raw)
  To: linux-kernel
  Cc: Linus Torvalds, Dieter [iso-8859-15] Nützel,
	Martin Knoblauch, Davide Libenzi, Ed Tomlinson, Rene Rebe


the -J2 O(1) scheduler patch is available:

    http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.5.3-pre1-J2.patch
    http://redhat.com/~mingo/O(1)-scheduler/sched-O1-2.4.17-J2.patch

Changes since -J0:

 - Ed Tomlinson: optimize wake_up_forked_process() further.

 - the -J0 patch had a broken version of the task migration code - it did
   not include all the necessary changes for task migration to work at
   all. This broke boxes with 3 or more CPUs: the setting of the
   task-migration IPI vector was missing. -J2 test-booted on an 8-way
   system just fine.

 - micro-optimize wakeup: high-frequency wakeups do not call the average
   calculation code.

 - finishing touches on interactiveness:

  1) default-niceness processes can only reach 90% of the full priority
     range. This protects normal processes from nice +10 CPU hogs, and
     protects nice -20 interactive tasks (audio playback, emergency
     shells, etc.) from normal processes.

  2) updates on priority inheritance of forked children: child processes
     get 80% of the parent's priority. [it was 66% in -J0.] The difference
     is visible during high compilation load, xterms under Gnome/KDE start
     up much faster, because such startups create two new processes, thus
     the second process gets the penalty twice. With 80%, the penalty is
     just enough for the shell to stay out of the 'CPU-bound hell' of
     compilation jobs.
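
     a minimal sketch of the inheritance rule described above (names and
     the integer arithmetic are illustrative assumptions, not the actual
     -J2 code): the child keeps 80% of the parent's priority credit, so
     two chained forks (shell -> compiler) apply the penalty twice:

        /* hypothetical sketch: fork-time priority inheritance */
        #include <assert.h>

        #define CHILD_PENALTY 80  /* percent of parent's credit kept */

        static int inherited_credit(int parent_credit)
        {
                return parent_credit * CHILD_PENALTY / 100;
        }

        int main(void)
        {
                int parent = 100;
                int child = inherited_credit(parent);       /* one fork */
                int grandchild = inherited_credit(child);   /* two forks */

                assert(child == 80);
                assert(grandchild == 64);  /* the old 66% rule left 43 */
                return 0;
        }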

  3) the 0...39 'user priority' range is now split up into three areas:

        A) 'unconditionally interactive tasks' in the lower 25% range.
        B) 'CPU-bound tasks' in the high 25% range.
        C) 'conditionally interactive tasks' in the middle 50% range.

     tasks in category C) are interactive if they are 10% below their
     default priority. (i.e. if they sleep more than they run.)
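
     a rough sketch of the three-way split, assuming 25%/50%/25%
     boundaries on the 0...39 range (the exact boundary arithmetic is an
     assumption, not copied from the -J2 patch; lower values mean better
     priority):

        /* illustrative classification of the 0..39 user-priority range */
        #include <assert.h>

        enum area { INTERACTIVE, CONDITIONAL, CPU_BOUND };

        static enum area classify(int user_prio)  /* 0..39 */
        {
                if (user_prio < 10)     /* lower 25%: always interactive */
                        return INTERACTIVE;
                if (user_prio >= 30)    /* upper 25%: CPU-bound */
                        return CPU_BOUND;
                return CONDITIONAL;     /* middle 50% */
        }

        int main(void)
        {
                assert(classify(0) == INTERACTIVE);
                assert(classify(20) == CONDITIONAL);
                assert(classify(39) == CPU_BOUND);
                return 0;
        }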

the new interactiveness changes made my systems even smoother than they
were under 2.5.3-pre1. None of the interactiveness logic changes add
overhead to the fast path. (the changes are either compile-time, or are in
some slow path.) Each of the above three changes was measured to improve
interactivity under compilation and other workloads.

Comments, reports, suggestions welcome. Especially the testing of
interactiveness would be great, comparing the -J2 patch against other
kernels. (stock or patched kernels, 2.4 or 2.5 kernels, older O(1)
scheduler patches, etc.)

	Ingo


^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [patch] O(1) scheduler updates, -J2
@ 2002-01-20  0:19 James C. Owens
  0 siblings, 0 replies; 6+ messages in thread
From: James C. Owens @ 2002-01-20  0:19 UTC (permalink / raw)
  To: mingo, linux-kernel

Ingo,

Tested -J2 on a SuSE 6.4 system with a Linux 2.4.17 kernel on my "big"
Linux box - Tyan Tiger MP, 2x Athlon MP 1600+ with 1.25 GB RAM and a
Mylex AcceleRAID 170 in RAID 5. Default parameters for timeslice, etc.
in the -J2 sched.h were used.

Excellent interactivity maintained with make -j bzImage issued - load
average was about 100. Mozilla 0.9.7 was very usable while this was 
going on.

Kernel compile times:

with no other significant processes (CPU wise) running

                          O(1) J2          default

make bzImage            3 min 45 sec     3 min 36 sec	
make -j2 bzImage        1 min 59 sec     1 min 57 sec
make -j16 bzImage       2 min 02 sec     2 min 01 sec
make -j bzImage         2 min 13 sec     2 min 09 sec
      peak -j load        106              210


with 2 setiathome processes running at nice 19

                          O(1) J2          default

make bzImage            4 min 10 sec     3 min 42 sec
make -j2 bzImage        2 min 56 sec     2 min 17 sec
make -j16 bzImage       2 min 22 sec     2 min 03 sec
make -j bzImage         2 min 24 sec     2 min 11 sec
      peak -j load        100              206

A few things come to my attention with these numbers. It seems as if 
heavily niced processes are still getting too much CPU time. There may 
also be work still needed on the parent-child handling. Watching the 
graphical output of xosview and KDE System Guard during make -j2 with 
the SETI processes running showed that CPU 0 was only being used about 
50% of the time on average; sometimes it was used entirely by the niced 
SETI process, and at other times entirely by the compile. The fraction 
of CPU 0 used by the non-niced compile processes would bounce wildly 
between 0 and 100% (with SETI always making up the difference), while 
CPU 1 always remained fully loaded with the compile. Dividing the two 
corresponding times for the make -j2 case ((1+59/60)/(2+56/60)) gives a 
CPU usage of 0.68 of available for compilation.

Using this 0.68 as a proportion, the SETI process(es) were getting 0.32, 
or 32%, of the total processor time on the machine. Assigning 0.5 to 
CPU 1 (which was fully loaded) and subtracting from 0.68 gives 0.18. 
Multiplying 0.18 by two to get the fraction on CPU 0 gives 0.36, or 
36%, which means CPU 0 was spending 36% of its time on the compile and 
64% on SETI. This is in line with the visual behavior seen watching 
xosview and KDE System Guard.
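
The arithmetic above can be checked with a few lines of C (the 0.5 for
CPU 1 is my assumption that it ran only the compile):

    /* recompute the make -j2 CPU-share figures from the two timings */
    #include <assert.h>

    int main(void)
    {
            double fast = 1.0 + 59.0 / 60.0;  /* 1 min 59 sec, no SETI */
            double slow = 2.0 + 56.0 / 60.0;  /* 2 min 56 sec, with SETI */
            double share = fast / slow;       /* compile's machine share */

            /* CPU 1 contributes 0.5 of the machine; the remainder is
             * CPU 0's contribution, doubled to get its local usage. */
            double cpu0 = (share - 0.5) * 2.0;

            assert(share > 0.67 && share < 0.69);  /* ~0.68 */
            assert(cpu0 > 0.34 && cpu0 < 0.37);    /* ~0.36 */
            return 0;
    }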

Do you think this is correct behavior, or is more tuning or adjustment 
to the balancing algorithm in order? Maybe the same issue that is 
causing this is behind getting a peak load of only about 100 doing make 
-j with O(1) versus about 200 with the normal scheduler.

The interactivity is absolutely stunning. The GUI remains extremely 
usable even during the make -j runs, which bring interactivity to its 
knees with the normal scheduler.

Absolutely *no* stability issues whatsoever. Rock-solid.

Hope this provides some useful info.

Jim Owens


^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2002-01-21 18:47 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-01-18 18:18 [patch] O(1) scheduler updates, -J2 Ingo Molnar
2002-01-19 22:19 ` Matthew Sackman
2002-01-20 23:01   ` Matthew Sackman
2002-01-21  0:02     ` Martin Mačok
2002-01-21 18:46       ` Matthew Sackman
  -- strict thread matches above, loose matches on Subject: below --
2002-01-20  0:19 James C. Owens

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox