public inbox for linux-kernel@vger.kernel.org
* fair group scheduler not so fair?
@ 2008-05-21 23:59 Chris Friesen
  2008-05-22  6:56 ` Peter Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Chris Friesen @ 2008-05-21 23:59 UTC (permalink / raw)
  To: linux-kernel, vatsa, mingo, a.p.zijlstra, pj

I just downloaded the current git head and started playing with the fair 
group scheduler.  (This is on a dual cpu Mac G5.)

I created two groups, "a" and "b".  Each of them was left with the 
default share of 1024.

I created three cpu hogs by doing "cat /dev/zero > /dev/null".  One hog 
(pid 2435) was put into group "a", while the other two were put into 
group "b".
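
The mail does not show the actual commands used to create the groups. With CONFIG_FAIR_GROUP_SCHED and the cgroup "cpu" controller of that era, the setup would look roughly like the sketch below; the /dev/cgroup mount point and the exact sequence are my assumptions, not taken from the mail.

```shell
# Hypothetical reconstruction (requires root; paths are assumed).
mount -t cgroup -o cpu none /dev/cgroup
mkdir /dev/cgroup/a /dev/cgroup/b
cat /dev/cgroup/a/cpu.shares        # both groups keep the default, 1024

cat /dev/zero > /dev/null &         # first hog
echo $! > /dev/cgroup/a/tasks       # into group "a"
cat /dev/zero > /dev/null &         # second hog
echo $! > /dev/cgroup/b/tasks       # into group "b"
cat /dev/zero > /dev/null &         # third hog
echo $! > /dev/cgroup/b/tasks       # also into group "b"
```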

After giving them time to settle down, "top" showed the following:

2438 cfriesen  20   0  3800  392  336 R 99.5  0.0   4:02.82 cat
2435 cfriesen  20   0  3800  392  336 R 65.9  0.0   3:30.94 cat
2437 cfriesen  20   0  3800  392  336 R 34.3  0.0   3:14.89 cat

Where pid 2435 should have gotten a whole cpu's worth of time, it
actually got only about 66% of a cpu.  Is this expected behaviour?
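
For reference, here is the ideal split by my own arithmetic (not from the scheduler): equal group shares of 1024, two cpus, so 200% of cpu in top's units to divide between the groups.

```shell
# Ideal per-task cpu, test 1: equal shares (1024/1024), 2 cpus = 200%.
total=200; share_a=1024; share_b=1024
a_task=$(( total * share_a / (share_a + share_b) ))      # group "a": 1 task
b_task=$(( total * share_b / (share_a + share_b) / 2 ))  # group "b": 2 tasks
echo "ideal: a=${a_task}% b=${b_task}% each"
```

That gives 100% for the lone task in "a" and 50% for each task in "b", against the observed 65.9 / 99.5 / 34.3.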



I then redid the test with two hogs in one group and three hogs in the 
other group.  Unfortunately, the cpu shares were not equally distributed 
within each group.  Using a 10-sec interval in "top", I got the following:


2522 cfriesen  20   0  3800  392  336 R 52.2  0.0   1:33.38 cat
2523 cfriesen  20   0  3800  392  336 R 48.9  0.0   1:37.85 cat
2524 cfriesen  20   0  3800  392  336 R 37.0  0.0   1:23.22 cat
2525 cfriesen  20   0  3800  392  336 R 32.6  0.0   1:22.62 cat
2559 cfriesen  20   0  3800  392  336 R 28.7  0.0   0:24.30 cat

Do we expect to see upwards of 9% relative unfairness between processes 
within a class?
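
For comparison, the ideal per-task numbers under the same assumptions as before (equal group shares, 200% total across the two cpus) would be:

```shell
# Ideal per-task cpu, test 2: each group gets 100%; integer division,
# so the 3-task group shows 33 rather than 33.3.
pair_each=$(( 200 / 2 / 2 ))   # 2-task group: 50% each
trio_each=$(( 200 / 2 / 3 ))   # 3-task group: ~33% each
echo "ideal: pair=${pair_each}% trio=${trio_each}%"
```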

I tried messing with the tuneables in /proc/sys/kernel 
(sched_latency_ns, sched_migration_cost, sched_min_granularity_ns) but 
was unable to significantly improve these results.

Any pointers would be appreciated.

Thanks,

Chris



Thread overview: 26+ messages
2008-05-21 23:59 fair group scheduler not so fair? Chris Friesen
2008-05-22  6:56 ` Peter Zijlstra
2008-05-22 20:02   ` Chris Friesen
2008-05-22 20:07     ` Peter Zijlstra
2008-05-22 20:18       ` Li, Tong N
2008-05-22 21:13         ` Peter Zijlstra
2008-05-23  0:17           ` Chris Friesen
2008-05-23  7:44             ` Srivatsa Vaddagiri
2008-05-23  9:42         ` Srivatsa Vaddagiri
2008-05-23  9:39           ` Peter Zijlstra
2008-05-23 10:19             ` Srivatsa Vaddagiri
2008-05-23 10:16               ` Peter Zijlstra
2008-05-27 17:15 ` Srivatsa Vaddagiri
2008-05-27 18:13   ` Chris Friesen
2008-05-28 16:33     ` Srivatsa Vaddagiri
2008-05-28 18:35       ` Chris Friesen
2008-05-28 18:47         ` Dhaval Giani
2008-05-29  2:50         ` Srivatsa Vaddagiri
2008-05-29 16:46         ` Srivatsa Vaddagiri
2008-05-29 16:47           ` Srivatsa Vaddagiri
2008-05-29 21:30           ` Chris Friesen
2008-05-30  6:43             ` Dhaval Giani
2008-05-30 10:21               ` Srivatsa Vaddagiri
2008-05-30 11:36             ` Srivatsa Vaddagiri
2008-06-02 20:03               ` Chris Friesen
2008-05-27 17:28 ` Srivatsa Vaddagiri
