* [BENCHMARK] 2.5.59-mm6 with contest
@ 2003-01-31 5:52 Con Kolivas
2003-01-31 7:38 ` Andrew Morton
From: Con Kolivas @ 2003-01-31 5:52 UTC (permalink / raw)
To: linux kernel mailing list; +Cc: Andrew Morton, Nick Piggin
Here are contest (http://contest.kolivas.net) benchmark results on the OSDL
hardware (http://www.osdl.org) for 2.5.59-mm6 (the hardware was reconfigured
again to get the most useful results with contest). These results have been
checked for accuracy and repeatability, and asterisks have been placed next to
statistically significant differences.
I do believe these show that sequential reads are indeed scheduled before
writes with this kernel. The question is, how long should they be scheduled
for?
no_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     79   94.9    0     0.0   1.00
2.5.59-mm6     1     78   96.2    0     0.0   1.00

cacherun:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     76   98.7    0     0.0   0.96
2.5.59-mm6     1     76   97.4    0     0.0   0.97

process_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     92   81.5   28    16.3   1.16
2.5.59-mm6     1     92   81.5   25    15.2   1.18

ctar_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     98   80.6    2     5.1   1.24*
2.5.59-mm6     3    112   70.5    2     4.5   1.44*

xtar_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3    101   75.2    1     4.0   1.28*
2.5.59-mm6     3    115   66.1    1     4.3   1.47*

io_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3    153   50.3    8    13.7   1.94*
2.5.59-mm6     3    106   70.8    4     9.4   1.36*

read_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3    102   76.5    5     4.9   1.29*
2.5.59-mm6     3    733   10.8   56     6.3   9.40*

list_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     95   80.0    0     6.3   1.20*
2.5.59-mm6     3     97   79.4    0     6.2   1.24*

mem_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     97   80.4   56     2.1   1.23
2.5.59-mm6     3     94   83.0   50     2.1   1.21

dbench_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3    126   60.3    3    22.2   1.59
2.5.59-mm6     3    122   61.5    3    25.4   1.56

io_other:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.59         3     89   84.3    2     5.5   1.13
2.5.59-mm6     2     90   83.3    2     6.7   1.15
The io_load result is excellent, showing that the continuous write delays
kernel compilation much less. read_load tells the rest of the story, though:
read_load repeatedly reads a 256MB file (the size of physical RAM in the test
machine).
Con
* Re: [BENCHMARK] 2.5.59-mm6 with contest
From: Andrew Morton @ 2003-01-31 7:38 UTC (permalink / raw)
To: Con Kolivas; +Cc: linux-kernel, piggin
Con Kolivas <conman@kolivas.net> wrote:
>
> I do believe these show that sequential reads are indeed scheduled before
> writes with this kernel. The question is, how long should they be scheduled
> for?
No, Nick has some logic in there which remembers the last-submitted sector
and treats that as an insertion candidate as well. That's unrelated to
the anticipatory code.
Which is all fine, but it needs some taming to prevent the obvious starvation
which can happen.
I'm going to nobble that for now. There have been enormous gyrations in the
behaviour of the IO scheduler in recent months and I wish to get it settled
down to good all-round behaviour _without_ the anticipatory scheduler in
place. Because there are fairly hairy issues surrounding the anticipatory
scheduler and device drivers - anticipatory scheduling may not get there.
I have spent some time this week tuning up and fixing the non-anticipatory
I/O scheduler and I'd like to stabilise that code for a while, to provide a
decent baseline against which to continue the anticipatory development.
In fact, the tuned-up scheduler performs respectably against the anticipatory
code, which isn't theoretically correct, and indicates that the anticipatory
code can be optimised further.