* [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
@ 2007-04-14 8:49 surya.prabhakar
2007-04-14 9:06 ` Willy Tarreau
2007-04-14 12:02 ` Ingo Molnar
0 siblings, 2 replies; 8+ messages in thread
From: surya.prabhakar @ 2007-04-14 8:49 UTC (permalink / raw)
To: mingo, kernel
Cc: linux-kernel, torvalds, akpm, npiggin, efault, arjan, tglx, wli
Hi Ingo,
Did a test with massive_intr.c on a standard Linux desktop,
for vanilla, Con's SD-0.40 and CFS.
Test bed is a P4 2.26GHz with 512MB RAM.
No load on the machine except for the X server.
Results are stated below
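For reference, massive_intr <nproc> <runtime> forks <nproc> worker
processes that each alternate a short CPU burst with a short sleep for
<runtime> seconds; the two output columns are each worker's PID and the
number of loops it completed. A rough sketch of one worker's loop -- not
the actual massive_intr.c source, and the 8 ms work / 1 ms sleep
constants are assumptions:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define WORK_MSECS  8     /* assumed CPU burst per loop */
#define SLEEP_USECS 1000  /* assumed sleep per loop (~1 ms) */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    double end = now() + 10;  /* runtime: the second argument */
    unsigned long loops = 0;

    while (now() < end) {
        double spin_until = now() + WORK_MSECS / 1000.0;
        while (now() < spin_until)
            ;                     /* burn CPU */
        usleep(SLEEP_USECS);      /* brief sleep, like an interactive task */
        loops++;
    }
    printf("%06d %08lu\n", (int)getpid(), loops);
    return 0;
}

With 10 such workers sharing one CPU for 10 seconds, a perfectly fair
scheduler gives each about 1 s of CPU, i.e. roughly 125 loops -- which
is what the SD and CFS counts below hover around, while vanilla spreads
from 13 to 548.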
Vanilla Kernel
--------------------
2.6.21-rc6-vanilla #8 SMP Sat Apr 14
[surya@bluegenie tests]$ ./massive_intr 10 10
002469 00000548
002463 00000499
002464 00000026
002465 00000028
002472 00000021
002467 00000018
002470 00000015
002466 00000017
002468 00000024
002471 00000013
top output
top - 13:49:53 up 4 min, 3 users, load average: 1.24, 0.71, 0.30
Tasks: 80 total, 10 running, 70 sleeping, 0 stopped, 0 zombie
Cpu(s): 13.3%us, 1.8%sy, 0.0%ni, 72.1%id, 12.7%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 506112k total, 440980k used, 65132k free, 31444k buffers
Swap: 522104k total, 0k used, 522104k free, 309560k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2469 surya     16   0  1624  204  132 S 33.8  0.0  0:00.30  massive_intr
 2467 surya     22   0  1624  204  132 R 19.9  0.0  0:00.20  massive_intr
 2466 surya     23   0  1624  204  132 R 15.9  0.0  0:00.20  massive_intr
 2470 surya     24   0  1624  204  132 R 13.9  0.0  0:00.18  massive_intr
 2468 surya     19   0  1624  204  132 R  8.0  0.0  0:00.19  massive_intr
 2463 surya     19   0  1624  204  132 R  6.0  0.0  0:00.26  massive_intr
    1 root      15   0  2032  640  548 S  0.0  0.1  0:00.72  bash
<snip>
 2462 surya     25   0  1624  428  356 S  0.0  0.1  0:01.63  massive_intr
 2464 surya     20   0  1624  204  132 R  0.0  0.0  0:00.21  massive_intr
 2465 surya     21   0  1624  204  132 R  0.0  0.0  0:00.20  massive_intr
 2471 surya     20   0  1624  204  132 R  0.0  0.0  0:00.10  massive_intr
 2472 surya     21   0  1624  204  132 R  0.0  0.0  0:00.20  massive_intr
 2473 surya     15   0  2160  900  724 R  0.0  0.2  0:00.00  top
SD-0.40
---------------
2.6.21-rc6-rsdl0.40 #6 SMP Sat Apr 14 13:08:31 IST 2007 i686 i686 i386 GNU/Linux
[root@bluegenie tests]# ./massive_intr 10 10
002504 00000125
002502 00000124
002500 00000123
002499 00000127
002501 00000125
002505 00000127
002506 00000127
002503 00000128
002498 00000124
002507 00000127
top - 13:42:50 up 5 min, 3 users, load average: 2.03, 1.31, 0.61
Tasks: 80 total, 12 running, 68 sleeping, 0 stopped, 0 zombie
Cpu(s): 17.8%us, 1.4%sy, 0.0%ni, 69.8%id, 11.0%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 506112k total, 441664k used, 64448k free, 31592k buffers
Swap: 522104k total, 0k used, 522104k free, 310152k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2500 root      29   0  1620  204  132 R 11.2  0.0  0:00.28  massive_intr
 2501 root      29   0  1620  204  132 R 11.2  0.0  0:00.30  massive_intr
 2502 root      29   0  1620  204  132 R 11.2  0.0  0:00.30  massive_intr
 2504 root      29   0  1620  204  132 R 11.2  0.0  0:00.30  massive_intr
 2498 root      29   0  1620  204  132 R  9.3  0.0  0:00.29  massive_intr
 2499 root      29   0  1620  204  132 R  9.3  0.0  0:00.28  massive_intr
 2503 root      27   0  1620  204  132 R  9.3  0.0  0:00.27  massive_intr
 2505 root      27   0  1620  204  132 R  9.3  0.0  0:00.29  massive_intr
 2506 root      27   0  1620  204  132 R  9.3  0.0  0:00.28  massive_intr
 2507 root      27   0  1620  204  132 R  9.3  0.0  0:00.28  massive_intr
    1 root       5   0  2036  640  548 S  0.0  0.1  0:00.72  init
<snip>
 2497 root       1   0  1620  428  356 S  0.0  0.1  0:01.69  massive_intr
 2508 surya     27   0  2164  904  724 R  0.0  0.2  0:00.00  top
CFS-kernel
----------------
2.6.21-rc6-cfs #7 SMP Sat Apr 14
pass1
-----
[surya@bluegenie tests]$ ./massive_intr 10 10
002435 00000120
002439 00000120
002441 00000120
002434 00000120
002436 00000120
002440 00000120
002432 00000120
002437 00000120
002433 00000120
002438 00000120
Felt it is too fair, will try another pass ;)
[surya@bluegenie tests]$ ./massive_intr 10 10
002961 00000121
002965 00000120
002964 00000121
002959 00000120
002956 00000121
002963 00000121
002960 00000121
002962 00000121
002958 00000122
002957 00000122
top output
top - 13:55:39 up 3 min, 3 users, load average: 0.24, 0.46, 0.22
Tasks: 80 total, 11 running, 69 sleeping, 0 stopped, 0 zombie
Cpu(s): 6.7%us, 2.2%sy, 0.0%ni, 75.3%id, 15.7%wa, 0.1%hi, 0.0%si, 0.0%st
Mem: 506108k total, 438100k used, 68008k free, 31144k buffers
Swap: 522104k total, 0k used, 522104k free, 306932k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2439 surya     20   0  1620  204  132 R 12.7  0.0  0:00.24  massive_intr
 2432 surya     20   0  1620  204  132 R 10.9  0.0  0:00.24  massive_intr
 2441 surya     20   0  1620  204  132 R 10.9  0.0  0:00.25  massive_intr
 2433 surya     20   0  1620  204  132 R  9.1  0.0  0:00.20  massive_intr
 2435 surya     20   0  1620  204  132 R  9.1  0.0  0:00.23  massive_intr
 2434 surya     20   0  1620  204  132 R  7.2  0.0  0:00.23  massive_intr
 2436 surya     20   0  1620  204  132 R  7.2  0.0  0:00.20  massive_intr
 2437 surya     20   0  1620  204  132 R  7.2  0.0  0:00.20  massive_intr
 2438 surya     20   0  1620  204  132 R  7.2  0.0  0:00.22  massive_intr
 2440 surya     20   0  1620  204  132 R  7.2  0.0  0:00.21  massive_intr
    1 root      20   0  2036  640  548 S  0.0  0.1  0:00.72  init
    2 root      RT   0     0    0    0 S  0.0  0.0  0:00.00
<snip>
 2431 surya     20   0  1620  428  356 S  0.0  0.1  0:01.65  massive_intr
 2442 surya     20   0  2160  900  724 R  0.0  0.2  0:00.00  top
Noticed a considerably higher load average while running Con's kernel.
CPU time is very, very close to being fair in CFS. Will be trying out
ringtest in the next round.
thanks..
--
surya.
14/04/2007
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-14 8:49 [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40 surya.prabhakar
@ 2007-04-14 9:06 ` Willy Tarreau
2007-04-14 12:02 ` Ingo Molnar
1 sibling, 0 replies; 8+ messages in thread
From: Willy Tarreau @ 2007-04-14 9:06 UTC (permalink / raw)
To: surya.prabhakar
Cc: mingo, kernel, linux-kernel, torvalds, akpm, npiggin, efault,
arjan, tglx, wli
Hi Surya,
On Sat, Apr 14, 2007 at 02:19:06PM +0530, surya.prabhakar@wipro.com wrote:
(...)
thanks for the interesting results.
> CPU time is very, very close to being fair in CFS. Will be trying out
> ringtest in the next round.
It looks like Ingo can do very good things, he just needs very strong
competition to decide to go to the keyboard ;-)
Sorta like lazy athletes who decide to make an effort at the last
moment so as not to lose their first place...
Cheers,
Willy
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-14 8:49 [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40 surya.prabhakar
2007-04-14 9:06 ` Willy Tarreau
@ 2007-04-14 12:02 ` Ingo Molnar
2007-04-14 19:55 ` William Lee Irwin III
2007-04-16 8:26 ` Satoru Takeuchi
1 sibling, 2 replies; 8+ messages in thread
From: Ingo Molnar @ 2007-04-14 12:02 UTC (permalink / raw)
To: surya.prabhakar
Cc: kernel, linux-kernel, torvalds, akpm, npiggin, efault, arjan,
tglx, wli
* surya.prabhakar@wipro.com <surya.prabhakar@wipro.com> wrote:
> Hi Ingo,
> Did a test with massive_intr.c on a standard Linux desktop,
> for vanilla, Con's SD-0.40 and CFS.
thanks!
> [surya@bluegenie tests]$ ./massive_intr 10 10
> 002435 00000120
> 002439 00000120
> 002441 00000120
> 002434 00000120
> 002436 00000120
> 002440 00000120
> 002432 00000120
> 002437 00000120
> 002433 00000120
> 002438 00000120
>
> Felt it is too fair, will try another pass ;)
hehe :)
> [surya@bluegenie tests]$ ./massive_intr 10 10
> 002961 00000121
> 002965 00000120
> 002964 00000121
> 002959 00000120
> 002956 00000121
> 002963 00000121
> 002960 00000121
> 002962 00000121
> 002958 00000122
> 002957 00000122
btw., other schedulers might work better with some more test-time: i'd
suggest using 60 seconds (./massive_intr 10 60) [or maybe more, using
more threads] to see long-term fairness effects.
> [...] Will be trying out ringtest in the next round.
cool. ringtest.c is intended to be used the following way: start it and
it will generate a 99% busy system (it uses a ring of 100 tasks, where
each task runs for 100 msecs then sleeps for 1 msec, so every task gets
a turn every 10 seconds). If you add a pure CPU hog to the system, for
example an infinite shell loop:
while :; do :; done &
then a 'fair' scheduler would give roughly 50% of CPU time to the CPU
hog (and the ringtest.c tasks take up the other 50%).
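A minimal sketch of that scheme -- not the actual ringtest.c, just the
token ring described above, with the task count and timings taken from
the description:

#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NTASKS 100

static long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000L + ts.tv_nsec / 1000000;
}

int main(void)
{
    int pipes[NTASKS][2];
    char token = 't';

    for (int i = 0; i < NTASKS; i++)
        if (pipe(pipes[i]) < 0)
            exit(1);

    for (int i = 0; i < NTASKS; i++) {
        if (fork() == 0) {
            int next = (i + 1) % NTASKS;
            for (;;) {
                read(pipes[i][0], &token, 1);      /* wait for our turn */
                long end = now_ms() + 100;
                while (now_ms() < end)
                    ;                              /* run for ~100 msecs */
                usleep(1000);                      /* sleep for ~1 msec */
                write(pipes[next][1], &token, 1);  /* wake the next task */
            }
        }
    }
    write(pipes[0][1], &token, 1);                 /* start the ring */
    pause();                                       /* parent just waits */
    return 0;
}

The %CPU of the shell hog in top then gives a quick read on how close
the scheduler comes to the 50/50 split.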
Ingo
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-14 12:02 ` Ingo Molnar
@ 2007-04-14 19:55 ` William Lee Irwin III
2007-04-16 8:26 ` Satoru Takeuchi
1 sibling, 0 replies; 8+ messages in thread
From: William Lee Irwin III @ 2007-04-14 19:55 UTC (permalink / raw)
To: Ingo Molnar
Cc: surya.prabhakar, kernel, linux-kernel, torvalds, akpm, npiggin,
efault, arjan, tglx
On Sat, Apr 14, 2007 at 02:02:20PM +0200, Ingo Molnar wrote:
> cool. ringtest.c is intended to be used the following way: start it and
> it will generate a 99% busy system (it uses a ring of 100 tasks, where
> each task runs for 100 msecs then sleeps for 1 msec, so every task gets
> a turn every 10 seconds). If you add a pure CPU hog to the system, for
> example an infinite shell loop:
> while :; do :; done &
> then a 'fair' scheduler would give roughly 50% of CPU time to the CPU
> hog (and the ringtest.c tasks take up the other 50%).
I've queued up, as work to do at some point, modifying ringtest.c to
automatically spawn a CPU hog and then report both the aggregate CPU
bandwidth of the ring and the bandwidth of the hog. I've no guarantee
I'll get to it in a timely fashion, though.
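The measuring side could look something like this sketch -- purely
hypothetical, not wli's planned change: fork the hog, let the ring run
for the test duration, then read the hog's CPU time out of its rusage:

#include <signal.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int duration = 30;            /* test length in seconds (arbitrary) */
    pid_t hog = fork();

    if (hog < 0)
        return 1;
    if (hog == 0)
        for (;;)
            ;                     /* pure CPU hog: while :; do :; done */

    sleep(duration);              /* the ring would be running here */
    kill(hog, SIGKILL);

    struct rusage ru;
    wait4(hog, NULL, 0, &ru);     /* rusage of the reaped hog */
    double cpu = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    printf("hog got %.1f%% of one CPU\n", 100.0 * cpu / duration);
    return 0;
}

On a fair scheduler with the ring loaded, that figure should come out
near 50%.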
-- wli
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-14 12:02 ` Ingo Molnar
2007-04-14 19:55 ` William Lee Irwin III
@ 2007-04-16 8:26 ` Satoru Takeuchi
2007-04-16 8:47 ` Ingo Molnar
1 sibling, 1 reply; 8+ messages in thread
From: Satoru Takeuchi @ 2007-04-16 8:26 UTC (permalink / raw)
To: Ingo Molnar
Cc: surya.prabhakar, kernel, linux-kernel, torvalds, akpm, npiggin,
efault, arjan, tglx, wli
At Sat, 14 Apr 2007 14:02:20 +0200,
Ingo Molnar wrote:
>
>
> * surya.prabhakar@wipro.com <surya.prabhakar@wipro.com> wrote:
>
> > Hi Ingo,
> > Did a test with massive_intr.c on a standard Linux desktop,
> > for vanilla, Con's SD-0.40 and CFS.
>
> thanks!
>
> > [surya@bluegenie tests]$ ./massive_intr 10 10
> > 002435 00000120
> > 002439 00000120
> > 002441 00000120
> > 002434 00000120
> > 002436 00000120
> > 002440 00000120
> > 002432 00000120
> > 002437 00000120
> > 002433 00000120
> > 002438 00000120
> >
> > Felt it is too fair, will try another pass ;)
>
> hehe :)
>
> > [surya@bluegenie tests]$ ./massive_intr 10 10
> > 002961 00000121
> > 002965 00000120
> > 002964 00000121
> > 002959 00000120
> > 002956 00000121
> > 002963 00000121
> > 002960 00000121
> > 002962 00000121
> > 002958 00000122
> > 002957 00000122
>
> btw., other schedulers might work better with some more test-time: i'd
> suggest using 60 seconds (./massive_intr 10 60) [or maybe more, using
> more threads] to see long-term fairness effects.
I tested CFS with massive_intr, covering the long-term, many-CPU, and
many-process cases.
Test environment
================
- kernel: 2.6.21-rc6-CFS
- run time: 300 secs
- # of CPU: 1 or 4
- # of processes: 200 or 800
Result
======
+---------+-----------+-------+------+------+--------+
| # of    | # of      | avg   | max  | min  | stdev  |
| CPUs    | processes | (*1)  | (*2) | (*3) | (*4)   |
+---------+-----------+-------+------+------+--------+
| 1(i386) | 200       | 117.9 | 123  | 115  | 1.2    |
| 4(ia64) | 200       | 750.2 | 767  | 735  | 10.6   |
| 4(ia64) | 800(*5)   | 187.3 | 189  | 186  | 0.8    |
+---------+-----------+-------+------+------+--------+
*1) average number of loops among all processes
*2) maximum number of loops among all processes
*3) minimum number of loops among all processes
*4) standard deviation
*5) The # of processes per CPU is equal to that of the first test case.
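The four columns can be reproduced from massive_intr's per-process
output with a few lines of C -- a sketch, fed here with the loop counts
from the second CFS pass earlier in the thread, and assuming *4 is the
population standard deviation:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* loop counts from the second CFS pass above */
    double c[] = { 121, 120, 121, 120, 121, 121, 121, 121, 122, 122 };
    int n = sizeof(c) / sizeof(c[0]);
    double sum = 0, min = c[0], max = c[0], var = 0;

    for (int i = 0; i < n; i++) {
        sum += c[i];
        if (c[i] < min) min = c[i];
        if (c[i] > max) max = c[i];
    }
    double avg = sum / n;
    for (int i = 0; i < n; i++)
        var += (c[i] - avg) * (c[i] - avg);

    printf("avg %.1f  max %.0f  min %.0f  stdev %.1f\n",
           avg, max, min, sqrt(var / n));
    return 0;
}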
Pretty good! CFS seems to be fair in any situation.
Satoru
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-16 8:26 ` Satoru Takeuchi
@ 2007-04-16 8:47 ` Ingo Molnar
2007-04-16 10:13 ` Satoru Takeuchi
0 siblings, 1 reply; 8+ messages in thread
From: Ingo Molnar @ 2007-04-16 8:47 UTC (permalink / raw)
To: Satoru Takeuchi
Cc: surya.prabhakar, kernel, linux-kernel, torvalds, akpm, npiggin,
efault, arjan, tglx, wli
* Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> wrote:
> > btw., other schedulers might work better with some more test-time:
> > i'd suggest using 60 seconds (./massive_intr 10 60) [or maybe more,
> > using more threads] to see long-term fairness effects.
>
> I tested CFS with massive_intr, covering the long-term, many-CPU, and
> many-process cases.
>
> Test environment
> ================
>
> - kernel: 2.6.21-rc6-CFS
> - run time: 300 secs
> - # of CPU: 1 or 4
> - # of processes: 200 or 800
>
> Result
> ======
>
> +---------+-----------+-------+------+------+--------+
> | # of    | # of      | avg   | max  | min  | stdev  |
> | CPUs    | processes | (*1)  | (*2) | (*3) | (*4)   |
> +---------+-----------+-------+------+------+--------+
> | 1(i386) | 200       | 117.9 | 123  | 115  | 1.2    |
> | 4(ia64) | 200       | 750.2 | 767  | 735  | 10.6   |
> | 4(ia64) | 800(*5)   | 187.3 | 189  | 186  | 0.8    |
> +---------+-----------+-------+------+------+--------+
>
> *1) average number of loops among all processes
> *2) maximum number of loops among all processes
> *3) minimum number of loops among all processes
> *4) standard deviation
> *5) The # of processes per CPU is equal to that of the first test case.
>
> Pretty good! CFS seems to be fair in any situation.
thanks for testing this! Indeed the min-max values and standard
deviation look all pretty healthy. (They in fact seem to be better than
with the other patch of mine against upstream that you tested, correct?)
[ And there's also another nice little detail in your feedback: CFS
actually builds, boots and works fine on ia64 too ;-) ]
Ingo
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-16 8:47 ` Ingo Molnar
@ 2007-04-16 10:13 ` Satoru Takeuchi
2007-04-16 10:22 ` Ingo Molnar
0 siblings, 1 reply; 8+ messages in thread
From: Satoru Takeuchi @ 2007-04-16 10:13 UTC (permalink / raw)
To: Ingo Molnar
Cc: Satoru Takeuchi, surya.prabhakar, kernel, linux-kernel, torvalds,
akpm, npiggin, efault, arjan, tglx, wli, Greg KH
At Mon, 16 Apr 2007 10:47:25 +0200,
Ingo Molnar wrote:
>
>
> * Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> wrote:
>
> > > btw., other schedulers might work better with some more test-time:
> > > i'd suggest using 60 seconds (./massive_intr 10 60) [or maybe more,
> > > using more threads] to see long-term fairness effects.
> >
> > I tested CFS with massive_intr, covering the long-term, many-CPU, and
> > many-process cases.
> >
> > Test environment
> > ================
> >
> > - kernel: 2.6.21-rc6-CFS
> > - run time: 300 secs
> > - # of CPU: 1 or 4
> > - # of processes: 200 or 800
> >
> > Result
> > ======
> >
> > +---------+-----------+-------+------+------+--------+
> > | # of    | # of      | avg   | max  | min  | stdev  |
> > | CPUs    | processes | (*1)  | (*2) | (*3) | (*4)   |
> > +---------+-----------+-------+------+------+--------+
> > | 1(i386) | 200       | 117.9 | 123  | 115  | 1.2    |
> > | 4(ia64) | 200       | 750.2 | 767  | 735  | 10.6   |
> > | 4(ia64) | 800(*5)   | 187.3 | 189  | 186  | 0.8    |
> > +---------+-----------+-------+------+------+--------+
> >
> > *1) average number of loops among all processes
> > *2) maximum number of loops among all processes
> > *3) minimum number of loops among all processes
> > *4) standard deviation
> > *5) The # of processes per CPU is equal to that of the first test case.
> >
> > Pretty good! CFS seems to be fair in any situation.
>
> thanks for testing this! Indeed the min-max values and standard
> deviation look all pretty healthy. (They in fact seem to be better than
> with the other patch of mine against upstream that you tested, correct?)
Do you mean this one?
patch:
http://lkml.org/lkml/2007/3/27/225
test result:
http://lkml.org/lkml/2007/3/31/41
If so, yes. That patch showed some dispersion.
>
> [ And there's also another nice little detail in your feedback: CFS
> actually builds, boots and works fine on ia64 too ;-) ]
You are welcome. I happened to have access to a larger machine, and
also tested there just now.
Test environment
================
- kernel: 2.6.21-rc6-CFS
- run time: 300 secs
- # of CPU: 12
- # of processes: 200 or 2400
Result
===================================
+----------+-----------+-------+------+------+--------+
| # of | # of | avg | max | min | stdev |
| CPUs | processes | | | | |
+----------+-----------+-------+------+------+--------+
| | 200 | 2250 | 2348 | 2204 | 64 |
| 12(ia64) +-----------+-------+------+------+--------+
| | 2400 | 187.5 | 197 | 176 | 4.3 |
+----------+-----------+-------+------+------+--------+
Looks good too.
BTW, I have a question. This problem is actually fixed in CFS and SD.
However, they are mostly written from scratch and aren't suitable for a
stable version, for example 2.6.20.X. Could your other patch be a
compromise for the stable series? Although that patch is not perfect, I
think it's preferable to leaving the problem alone.
Satoru
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40
2007-04-16 10:13 ` Satoru Takeuchi
@ 2007-04-16 10:22 ` Ingo Molnar
0 siblings, 0 replies; 8+ messages in thread
From: Ingo Molnar @ 2007-04-16 10:22 UTC (permalink / raw)
To: Satoru Takeuchi
Cc: surya.prabhakar, kernel, linux-kernel, torvalds, akpm, npiggin,
efault, arjan, tglx, wli, Greg KH
* Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> wrote:
> You are welcome. I can use larger machine by chance, and also tested
> there just now.
>
> Test environment
> ================
>
> - kernel: 2.6.21-rc6-CFS
> - run time: 300 secs
> - # of CPU: 12
> - # of processes: 200 or 2400
>
> Result
> ===================================
>
> +----------+-----------+-------+------+------+--------+
> | # of | # of | avg | max | min | stdev |
> | CPUs | processes | | | | |
> +----------+-----------+-------+------+------+--------+
> | | 200 | 2250 | 2348 | 2204 | 64 |
> | 12(ia64) +-----------+-------+------+------+--------+
> | | 2400 | 187.5 | 197 | 176 | 4.3 |
> +----------+-----------+-------+------+------+--------+
>
> Looks good too.
yeah. For the 2400-task run, the spread between min and max is about
11%, and the stddev relative to the avg is about 2.3%, which is quite
OK for so many tasks.
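(That is, working from the 2400-task row of the table: (197 - 176) /
187.5 ≈ 11.2%, and 4.3 / 187.5 ≈ 2.3%.)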
> BTW, I have a question. This problem is actually fixed in CFS and SD.
> However, they are mostly written from scratch and aren't suitable for a
> stable version, for example 2.6.20.X. Could your other patch be a
> compromise for the stable series? Although that patch is not perfect, I
> think it's preferable to leaving the problem alone.
i'm afraid that small patch is not suitable for a general purpose Linux
release (it hits interactivity way too much) - that's what this
years-long struggle was about. But you could apply it to a special
server-centric kernel.
Ingo
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2007-04-16 10:22 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-04-14 8:49 [TEST RESULT]massive_intr.c -- cfs/vanilla/sd-0.40 surya.prabhakar
2007-04-14 9:06 ` Willy Tarreau
2007-04-14 12:02 ` Ingo Molnar
2007-04-14 19:55 ` William Lee Irwin III
2007-04-16 8:26 ` Satoru Takeuchi
2007-04-16 8:47 ` Ingo Molnar
2007-04-16 10:13 ` Satoru Takeuchi
2007-04-16 10:22 ` Ingo Molnar