public inbox for linux-kernel@vger.kernel.org
From: sat <takeuchi_satoru@jp.fujitsu.com>
To: lkml <linux-kernel@vger.kernel.org>, Con Kolivas <kernel@kolivas.org>
Cc: Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <peterz@infradead.org>,
	Raistlin <raistlin@linux.it>
Subject: massive_intr on CFS, BFS, and EDF
Date: Fri, 25 Sep 2009 08:53:46 +0900	[thread overview]
Message-ID: <4ABC068A.6070704@jp.fujitsu.com> (raw)

Hi,

I tried massive_intr, a program that tests a process scheduler's fairness and
throughput under a massive number of interactive processes, on vanilla 2.6.31,
2.6.31-bfs211 (with BFS), and 2.6.31-edf (latest Linus tree + EDF patch).

CFS and BFS both look good: CFS has better fairness, and BFS has better
throughput. EDF looks unfair and unstable. I ran the test three times and the
tendency was the same.

  NOTE:

  - The BFS patch is applied to 2.6.31, but the EDF patch is applied to the
    latest Linus tree, so the two can't be compared strictly. This report is
    just FYI.

  - The EDF kernel shows some strange behavior:

     * aptitude (the Debian package management tool) gets stuck partway through
     * oocalc doesn't start

  - I don't subscribe to LKML at the moment, so if you reply, please CC me.

Thanks,
Satoru

===============================================================================

[ test environment ]

A laptop with an x86_64 dual-core CPU

[ test program ]

 # CFS and BFS:
 $ massive_intr 30 30

 # EDF
 $ schedtool -E -d 100000 -b 25000 -e ./massive_intr 30 30

This means running 30 interactive processes simultaneously for 30 seconds.
A full description of the program is in the comments of its source code.

URL:
http://people.redhat.com/mingo/cfs-scheduler/tools/massive_intr.c
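For those who don't want to open the source: each massive_intr worker
alternates a short burst of busy work with a 1 ms sleep and counts completed
work periods, printing its PID and loop count at the end. The sketch below is
a simplified Python model of that loop; the 8 ms / 1 ms interval lengths are
my recollection of the constants in massive_intr.c, not verified against the
URL above.

```python
import os
import time

def worker_loop(runtime_s, work_ms=8, sleep_ms=1):
    """Simplified model of one massive_intr worker: busy-loop for
    work_ms, sleep for sleep_ms, repeat until runtime_s elapses,
    and count completed work periods. The 8 ms / 1 ms defaults are
    assumptions, not the authoritative values from massive_intr.c."""
    count = 0
    deadline = time.monotonic() + runtime_s
    while time.monotonic() < deadline:
        burst_end = time.monotonic() + work_ms / 1000.0
        while time.monotonic() < burst_end:
            pass                        # busy work (burn CPU)
        time.sleep(sleep_ms / 1000.0)   # yield the CPU briefly
        count += 1
    return count

if __name__ == "__main__":
    # One worker for 1 second; the real program forks N of these and
    # prints one "PID<TAB>count" line per worker, like the raw data below.
    print("%06d\t%08d" % (os.getpid(), worker_loop(1.0)))
```

The loop count is what the second column of the raw data reports, so a
scheduler that wakes the sleepers fairly produces nearly equal counts.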

[ test result ]

+---------------+-----------+---------+---------+---------+-----------+
| kernel        | scheduler | avg(*1) | min(*2) | max(*3) | stdev(*4) |
+---------------+-----------+---------+---------+---------+-----------+
| 2.6.31        |       CFS |     246 |     240 |     247 |       1.3 |
| 2.6.31-bfs211 |       BFS |     254 |     241 |     268 |       7.1 |
+---------------+-----------+---------+---------+---------+-----------+
| 2.6.31-edf(*5)|       EDF |     440 |     154 |    1405 |     444.8 |
+---------------+-----------+---------+---------+---------+-----------+

*1) average number of loops among all processes
*2) minimum number of loops among all processes
*3) maximum number of loops among all processes
*4) standard deviation
*5) The EDF kernel hung up partway through; the data covers only the 7
    processes that had printed results.

A high average means good throughput, and a low stdev means good fairness.
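The summary figures in the table can be recomputed from the raw data below
with a short script. A sketch (the helper name is mine; whether the original
table used the population or the sample standard deviation is an assumption —
the code below uses the population form):

```python
import math

def summarize(output):
    """Parse massive_intr's two-column output (PID, loop count) and
    return (avg, min, max, stdev). stdev is the population standard
    deviation; the original table may have used the sample form."""
    counts = [int(line.split()[1])
              for line in output.strip().splitlines()]
    n = len(counts)
    avg = sum(counts) / n
    var = sum((c - avg) ** 2 for c in counts) / n
    return avg, min(counts), max(counts), math.sqrt(var)

if __name__ == "__main__":
    # A three-line excerpt of the CFS raw data, just to show the format;
    # feeding all 30 lines reproduces the per-kernel table row.
    sample = "003873\t00000246\n003898\t00000240\n003882\t00000247\n"
    print(summarize(sample))
```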

[raw data]

# vanilla 2.6.31 (CFS)
sat@debian:~/practice/bfs$ uname -r
2.6.31
sat@debian:~/practice/bfs$ ./massive_intr 30 30
003873	00000246
003893	00000246
003898	00000240
003876	00000245
003888	00000245
003870	00000245
003882	00000247
003890	00000245
003872	00000245
003880	00000246
003895	00000246
003892	00000246
003878	00000246
003874	00000246
003896	00000246
003897	00000246
003884	00000246
003891	00000246
003894	00000246
003871	00000246
003886	00000247
003877	00000246
003879	00000246
003889	00000246
003881	00000246
003899	00000244
003887	00000247
003875	00000247
003885	00000247
003883	00000247

# 2.6.31-bfs211
sat@debian:~/practice/bfs$ uname -r
2.6.31-bfs211
sat@debian:~/practice/bfs$ ./massive_intr 30 30
004143	00000248
004127	00000241
004154	00000252
004145	00000255
004137	00000251
004148	00000263
004135	00000261
004153	00000247
004132	00000250
004146	00000248
004140	00000251
004130	00000245
004138	00000267
004136	00000249
004139	00000262
004141	00000255
004147	00000251
004131	00000253
004150	00000254
004152	00000254
004129	00000253
004142	00000242
004151	00000268
004128	00000263
004134	00000260
004144	00000252
004133	00000254
004149	00000265
004126	00000252
004125	00000246

# 2.6.31-edf (latest linus tree + edf patch)
sat@debian:~/practice/bfs$ uname -r
2.6.31-edf
sat@debian:~/practice/bfs$ schedtool-edf/schedtool -E -d 100000 -b
25000 -e ./massive_intr 30 30
Dumping mode: 0xa
Dumping affinity: 0xffffffff
We have 3 args to do
Dump arg 0: ./massive_intr
Dump arg 1: 30
Dump arg 2: 30
003915	00000541
003914	00001405
003916	00000310
003924	00000177
003923	00000154
003917	00000280
003918	00000211
# <kernel hung up here>




Thread overview: 11+ messages
2009-09-24 23:53 sat [this message]
2009-09-25  3:27 ` massive_intr on CFS, BFS, and EDF Satoru Takeuchi
2009-09-25  5:35   ` Raistlin
2009-09-25  5:35 ` Raistlin
2009-09-25 16:07 ` Michael Trimarchi
2009-09-25 21:31   ` Chris Friesen
2009-09-25 22:37     ` Peter Zijlstra
2009-09-25 22:46       ` Chris Friesen
2009-09-26  7:22         ` Raistlin
2009-09-26  6:56       ` Raistlin
2009-09-26  6:50     ` Raistlin
