From: Willy Tarreau <w@1wt.eu>
To: Ingo Molnar <mingo@elte.hu>, Con Kolivas <kernel@kolivas.org>
Cc: linux-kernel@vger.kernel.org,
Linus Torvalds <torvalds@linux-foundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Nick Piggin <npiggin@suse.de>, Mike Galbraith <efault@gmx.de>,
Arjan van de Ven <arjan@infradead.org>,
Peter Williams <pwil3058@bigpond.net.au>,
Thomas Gleixner <tglx@linutronix.de>,
caglar@pardus.org.tr, Gene Heskett <gene.heskett@gmail.com>
Subject: [REPORT] cfs-v4 vs sd-0.44
Date: Sat, 21 Apr 2007 14:12:35 +0200
Message-ID: <20070421121235.GA2044@1wt.eu>
In-Reply-To: <20070420140457.GA14017@elte.hu>
Hi Ingo, Hi Con,
I promised to perform some tests on your code. I'm short on time right now,
but I observed some behaviours that are worth commenting on.
1) machine : dual athlon 1533 MHz, 1G RAM, kernel 2.6.21-rc7 + either scheduler
Test: ./ocbench -R 250000 -S 750000 -x 8 -y 8
ocbench: http://linux.1wt.eu/sched/
2) SD-0.44
Feels good, but becomes jerky at moderately high loads. I started
64 ocbench processes, each with a 250 ms busy loop and a 750 ms sleep time.
The system always responds correctly, but under X the mouse jumps quite a
bit and typing in xterm or even on the text console feels slightly jerky.
The CPU is not completely used, and the load varies a lot (see below).
However, the load is shared equally between all 64 ocbench processes, and
they do not deviate even after 4000 iterations. X uses less than 1% CPU
during those tests.
Here's the vmstat output :
willy@pcw:~$ vmstat 1
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 0 919856 6648 57788 0 0 22 2 4 148 31 49 20
0 0 0 0 919856 6648 57788 0 0 0 0 2 285 32 50 19
28 0 0 0 919836 6648 57788 0 0 0 0 0 331 24 40 36
64 0 0 0 919836 6648 57788 0 0 0 0 1 618 23 40 37
65 0 0 0 919836 6648 57788 0 0 0 0 0 571 21 36 43
35 0 0 0 919836 6648 57788 0 0 0 0 3 382 32 50 18
2 0 0 0 919836 6648 57788 0 0 0 0 0 308 37 61 2
8 0 0 0 919836 6648 57788 0 0 0 0 1 533 36 65 0
32 0 0 0 919768 6648 57788 0 0 0 0 93 706 33 62 5
62 0 0 0 919712 6648 57788 0 0 0 0 65 617 32 54 13
63 0 0 0 919712 6648 57788 0 0 0 0 1 569 28 48 23
40 0 0 0 919712 6648 57788 0 0 0 0 0 427 26 50 24
4 0 0 0 919712 6648 57788 0 0 0 0 1 382 29 48 23
4 0 0 0 919712 6648 57788 0 0 0 0 0 383 34 65 0
14 0 0 0 919712 6648 57788 0 0 0 0 1 769 39 61 0
40 0 0 0 919712 6648 57788 0 0 0 0 0 384 37 52 11
54 0 0 0 919712 6648 57788 0 0 0 0 1 715 31 60 8
58 0 2 0 919712 6648 57788 0 0 0 0 1 611 34 65 0
41 0 0 0 919712 6648 57788 0 0 0 0 19 395 28 45 27
0 0 0 0 919712 6648 57788 0 0 0 0 31 421 23 32 45
0 0 0 0 919712 6648 57788 0 0 0 0 31 328 34 44 22
29 0 0 0 919712 6648 57788 0 0 0 0 34 369 32 43 25
65 0 0 0 919712 6648 57788 0 0 0 0 31 410 24 35 40
47 0 1 0 919712 6648 57788 0 0 0 0 42 538 25 39 35
3) CFS-v4
Feels even better: mouse movements are very smooth even under high load.
I noticed that X gets reniced to -19 with this scheduler. I've not looked
at the code yet, but this looked suspicious to me. I reniced it to 0 and
that did not change the behaviour; still very good. The 64 ocbench
processes share equal CPU time and show exactly the same progress after
2000 iterations. The CPU load is more smoothly spread according to vmstat,
and there's no idle time (see below). BUT I now think it was wrong to let
new processes start with no timeslice at all, because it can take tens of
seconds to start a new process when only 64 ocbench processes are running.
Simply running "killall ocbench" takes about 10 seconds. On a smaller
machine (VIA C3-533), it took me more than one minute to do "su -", even
from the console, so it's not X. BTW, X uses less than 1% CPU during those
tests.
willy@pcw:~$ vmstat 1
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
12 0 2 0 922120 6532 57540 0 0 299 29 31 386 17 27 57
12 0 2 0 922096 6532 57556 0 0 0 0 1 776 37 63 0
14 0 2 0 922096 6532 57556 0 0 0 0 1 782 35 65 0
13 0 1 0 922096 6532 57556 0 0 0 0 0 782 38 62 0
14 0 1 0 922096 6532 57556 0 0 0 0 1 782 36 64 0
13 0 1 0 922096 6532 57556 0 0 0 0 2 785 38 62 0
13 0 1 0 922096 6532 57556 0 0 0 0 1 774 35 65 0
14 0 1 0 922096 6532 57556 0 0 0 0 0 784 36 64 0
13 0 1 0 922096 6532 57556 0 0 0 0 1 767 37 63 0
13 0 1 0 922096 6532 57556 0 0 0 0 1 785 41 59 0
14 0 1 0 922096 6532 57556 0 0 0 0 0 779 38 62 0
19 0 1 0 922096 6532 57556 0 0 0 0 1 816 38 62 0
22 0 1 0 922096 6532 57556 0 0 0 0 0 817 35 65 0
19 0 1 0 922096 6532 57556 0 0 0 0 1 817 39 61 0
21 0 1 0 922096 6532 57556 0 0 0 0 0 849 36 64 0
20 0 0 0 922096 6532 57556 0 0 0 0 1 793 36 64 0
21 0 0 0 922096 6532 57556 0 0 0 0 0 815 37 63 0
19 0 0 0 922096 6532 57556 0 0 0 0 1 824 35 65 0
21 0 0 0 922096 6532 57556 0 0 0 0 0 817 35 65 0
26 0 0 0 922096 6532 57556 0 0 0 0 1 824 38 62 0
26 0 0 0 922096 6532 57556 0 0 0 0 1 817 35 65 0
26 0 0 0 922096 6532 57556 0 0 0 0 0 811 37 63 0
26 0 0 0 922096 6532 57556 0 0 0 0 1 804 34 66 0
16 0 0 0 922096 6532 57556 0 0 0 0 39 850 35 65 0
18 0 0 0 922096 6532 57556 0 0 0 0 1 801 39 61 0
4) first impressions
I think that CFS is based on a more promising concept but is less mature
and is dangerous right now with certain workloads. SD shows some strange
behaviours, like not using all of the available CPU, and a little
jerkiness, but it is more robust. It may be the least risky solution for a
first step towards a better scheduler in mainline, though it will probably
also be the last O(1) scheduler, to be replaced later once CFS (or any
other scheduler) combines the smoothness of CFS with the robustness of SD.
I'm sorry I can't spend more time on them right now; I hope other people
will.
Regards,
Willy
Thread overview: 147+ messages
2007-04-20 14:04 [patch] CFS scheduler, v4 Ingo Molnar
2007-04-20 21:37 ` Gene Heskett
2007-04-21 20:47 ` S.Çağlar Onur
2007-04-22 1:22 ` Gene Heskett
2007-04-20 21:39 ` mdew .
2007-04-21 6:47 ` Ingo Molnar
2007-04-21 7:55 ` [patch] CFS scheduler, v4, for v2.6.20.7 Ingo Molnar
2007-04-21 12:12 ` Willy Tarreau [this message]
2007-04-21 12:40 ` [REPORT] cfs-v4 vs sd-0.44 Con Kolivas
2007-04-21 13:02 ` Willy Tarreau
2007-04-21 15:46 ` Ingo Molnar
2007-04-21 16:18 ` Willy Tarreau
2007-04-21 16:34 ` Linus Torvalds
2007-04-21 16:42 ` William Lee Irwin III
2007-04-21 18:55 ` Kyle Moffett
2007-04-21 19:49 ` Ulrich Drepper
2007-04-21 23:17 ` William Lee Irwin III
2007-04-21 23:35 ` Linus Torvalds
2007-04-22 1:46 ` Ulrich Drepper
2007-04-22 7:02 ` William Lee Irwin III
2007-04-22 7:17 ` Ulrich Drepper
2007-04-22 8:48 ` William Lee Irwin III
2007-04-22 16:16 ` Ulrich Drepper
2007-04-23 0:07 ` Rusty Russell
2007-04-21 16:53 ` Willy Tarreau
2007-04-21 16:53 ` Ingo Molnar
2007-04-21 16:57 ` Willy Tarreau
2007-04-21 18:09 ` Ulrich Drepper
2007-04-21 17:03 ` Geert Bosch
2007-04-21 15:55 ` Con Kolivas
2007-04-21 16:00 ` Ingo Molnar
2007-04-21 16:12 ` Willy Tarreau
2007-04-21 16:39 ` William Lee Irwin III
2007-04-21 17:15 ` Jan Engelhardt
2007-04-21 19:00 ` Ingo Molnar
2007-04-22 13:18 ` Mark Lord
2007-04-22 13:27 ` Ingo Molnar
2007-04-22 13:30 ` Mark Lord
2007-04-25 8:16 ` Pavel Machek
2007-04-25 8:22 ` Ingo Molnar
2007-04-25 10:19 ` Alan Cox
2007-04-21 22:54 ` Denis Vlasenko
2007-04-22 0:08 ` Con Kolivas
2007-04-22 4:58 ` Mike Galbraith
2007-04-21 23:59 ` Con Kolivas
2007-04-22 13:04 ` Juliusz Chroboczek
2007-04-22 23:24 ` Linus Torvalds
2007-04-23 1:34 ` Nick Piggin
2007-04-23 15:56 ` Linus Torvalds
2007-04-23 19:11 ` Ingo Molnar
2007-04-23 19:52 ` Linus Torvalds
2007-04-23 20:33 ` Ingo Molnar
2007-04-23 20:44 ` Ingo Molnar
2007-04-23 21:03 ` Ingo Molnar
2007-04-23 21:53 ` Guillaume Chazarain
2007-04-24 7:04 ` Rogan Dawes
2007-04-24 7:31 ` Ingo Molnar
2007-04-24 8:25 ` Rogan Dawes
2007-04-24 15:03 ` Chris Friesen
2007-04-24 15:07 ` Rogan Dawes
2007-04-24 15:15 ` Chris Friesen
2007-04-24 23:55 ` Peter Williams
2007-04-25 9:29 ` Ingo Molnar
2007-04-23 22:48 ` Jeremy Fitzhardinge
2007-04-24 0:59 ` Li, Tong N
2007-04-24 1:57 ` Bill Huey
2007-04-24 18:01 ` Li, Tong N
2007-04-24 21:27 ` William Lee Irwin III
2007-04-24 22:18 ` Bernd Eckenfels
2007-04-25 1:22 ` Li, Tong N
2007-04-25 6:05 ` William Lee Irwin III
2007-04-25 9:44 ` Ingo Molnar
2007-04-25 11:58 ` William Lee Irwin III
2007-04-25 20:13 ` Willy Tarreau
2007-04-26 17:57 ` Li, Tong N
2007-04-26 19:18 ` Willy Tarreau
2007-04-28 15:12 ` Bernd Eckenfels
2007-04-26 23:26 ` William Lee Irwin III
2007-04-24 3:46 ` Peter Williams
2007-04-24 4:52 ` Arjan van de Ven
2007-04-24 6:21 ` Peter Williams
2007-04-24 6:36 ` Ingo Molnar
2007-04-24 7:00 ` Gene Heskett
2007-04-24 7:08 ` Ingo Molnar
2007-04-24 6:45 ` David Lang
2007-04-24 7:24 ` Ingo Molnar
2007-04-24 14:38 ` Gene Heskett
2007-04-24 17:44 ` Willy Tarreau
2007-04-25 0:30 ` Gene Heskett
2007-04-25 0:32 ` Gene Heskett
2007-04-24 7:12 ` Gene Heskett
2007-04-24 7:14 ` Ingo Molnar
2007-04-24 14:36 ` Gene Heskett
2007-04-24 7:25 ` Ingo Molnar
2007-04-24 14:39 ` Gene Heskett
2007-04-24 14:42 ` Gene Heskett
2007-04-24 7:33 ` Ingo Molnar
2007-04-26 0:51 ` SD renice recommendation was: " Con Kolivas
2007-04-24 15:08 ` Ray Lee
2007-04-25 9:32 ` Ingo Molnar
2007-04-23 20:05 ` Willy Tarreau
2007-04-24 21:05 ` 'Scheduler Economy' prototype patch for CFS Ingo Molnar
2007-04-23 2:42 ` [report] renicing X, cfs-v5 vs sd-0.46 Ingo Molnar
2007-04-23 15:09 ` Linus Torvalds
2007-04-23 17:19 ` Gene Heskett
2007-04-23 17:19 ` Gene Heskett
2007-04-23 19:48 ` Ingo Molnar
2007-04-23 20:56 ` Michael K. Edwards
2007-04-22 13:23 ` [REPORT] cfs-v4 vs sd-0.44 Mark Lord
2007-04-21 18:17 ` Gene Heskett
2007-04-22 1:26 ` Con Kolivas
2007-04-22 2:07 ` Gene Heskett
2007-04-22 8:07 ` William Lee Irwin III
2007-04-22 11:11 ` Gene Heskett
2007-04-22 1:51 ` Con Kolivas
2007-04-21 20:35 ` [patch] CFS scheduler, v4 S.Çağlar Onur
2007-04-22 8:30 ` Michael Gerdau
2007-04-23 22:47 ` Ingo Molnar
2007-04-23 1:12 ` [patch] CFS scheduler, -v5 Ingo Molnar
2007-04-23 1:25 ` Nick Piggin
2007-04-23 2:39 ` Gene Heskett
2007-04-23 3:08 ` Ingo Molnar
2007-04-23 2:55 ` Ingo Molnar
2007-04-23 3:22 ` Nick Piggin
2007-04-23 3:43 ` Ingo Molnar
2007-04-23 4:06 ` Nick Piggin
2007-04-23 7:10 ` Ingo Molnar
2007-04-23 7:25 ` Nick Piggin
2007-04-23 7:35 ` Ingo Molnar
2007-04-23 9:25 ` Ingo Molnar
2007-04-23 3:19 ` [patch] CFS scheduler, -v5 (build problem - make headers_check fails) Zach Carter
2007-04-23 10:03 ` Ingo Molnar
2007-04-23 5:16 ` [patch] CFS scheduler, -v5 Markus Trippelsdorf
2007-04-23 5:27 ` Markus Trippelsdorf
2007-04-23 6:21 ` Ingo Molnar
2007-04-25 11:43 ` Srivatsa Vaddagiri
2007-04-25 12:51 ` Ingo Molnar
2007-04-23 12:20 ` Guillaume Chazarain
2007-04-23 12:36 ` Ingo Molnar
2007-04-24 16:54 ` Christian Hesse
2007-04-25 9:25 ` Ingo Molnar
2007-04-25 10:51 ` Christian Hesse
2007-04-25 10:56 ` Ingo Molnar
2007-04-23 9:28 ` crash with CFS v4 and qemu/kvm (was: [patch] CFS scheduler, v4) Christian Hesse
2007-04-23 10:18 ` Ingo Molnar
2007-04-24 10:54 ` Christian Hesse
2007-04-22 4:38 [REPORT] cfs-v4 vs sd-0.44 Al Boldi