From: Con Kolivas <conman@kolivas.net>
To: linux kernel mailing list <linux-kernel@vger.kernel.org>
Cc: Robert Love <rml@tech9.net>
Subject: [BENCHMARK] scheduler tunables with contest - prio_bonus_ratio
Date: Fri, 20 Dec 2002 08:50:27 +1100
Message-ID: <200212200850.32886.conman@kolivas.net>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
contest results, OSDL hardware, scheduler tunable prio_bonus_ratio; the
default value (in 2.5.52-mm1) is 25. These results are interesting.
noload:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [8]     39.7  180   0      0      1.10
pri_bon00   [3]     40.6  180   0      0      1.12
pri_bon10   [3]     40.2  180   0      0      1.11
pri_bon30   [3]     39.7  181   0      0      1.10
pri_bon50   [3]     40.0  179   0      0      1.10
cacherun:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     36.9  194   0      0      1.02
pri_bon00   [3]     37.6  194   0      0      1.04
pri_bon10   [3]     37.2  194   0      0      1.03
pri_bon30   [3]     36.9  194   0      0      1.02
pri_bon50   [3]     36.7  195   0      0      1.01
process_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     49.0  144   10     50     1.35
pri_bon00   [3]     47.5  152   9      41     1.31
pri_bon10   [3]     48.2  147   10     47     1.33
pri_bon30   [3]     50.1  141   12     53     1.38
pri_bon50   [3]     46.2  154   8      39     1.28
Seems to subtly affect the balance here.
ctar_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     55.5  156   1      10     1.53
pri_bon00   [3]     44.6  165   0      5      1.23
pri_bon10   [3]     45.5  164   0      7      1.26
pri_bon30   [3]     52.0  154   1      10     1.44
pri_bon50   [3]     57.5  158   1      10     1.59
There seems to be a direct relationship: as prio_bonus_ratio goes up, time goes up.
xtar_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     77.4  122   1      8      2.14
pri_bon00   [3]     60.6  125   0      7      1.67
pri_bon10   [3]     61.7  125   1      8      1.70
pri_bon30   [3]     74.8  128   1      9      2.07
pri_bon50   [3]     74.5  130   1      8      2.06
As prio_bonus_ratio goes up, time goes up, but the effect maxes out.
io_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     80.5   108   10     19     2.22
pri_bon00   [3]     120.3  94    22     24     3.32
pri_bon10   [3]     123.6  91    20     23     3.41
pri_bon30   [3]     95.8   84    14     20     2.65
pri_bon50   [3]     76.8   114   11     21     2.12
As prio_bonus_ratio goes up, time goes down (a large effect).
io_other:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     60.1   131   7      18     1.66
pri_bon00   [3]     142.8  94    27     26     3.94
pri_bon10   [3]     116.5  93    22     26     3.22
pri_bon30   [3]     72.8   115   8      19     2.01
pri_bon50   [3]     99.8   97    15     22     2.76
Similar to io_load, though not quite linear.
read_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     49.9  149   5      6      1.38
pri_bon00   [3]     48.3  154   2      3      1.33
pri_bon10   [3]     49.5  150   5      6      1.37
pri_bon30   [3]     50.7  148   5      6      1.40
pri_bon50   [3]     49.8  149   5      6      1.38
list_load:
Kernel      [runs]  Time  CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     43.8  167   0      9      1.21
pri_bon00   [3]     43.7  168   0      7      1.21
pri_bon10   [3]     44.0  167   0      8      1.22
pri_bon30   [3]     44.0  166   0      9      1.22
pri_bon50   [3]     43.8  167   0      9      1.21
mem_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.5.52-mm1  [7]     71.1   123   36     2      1.96
pri_bon00   [3]     78.8   98    33     2      2.18
pri_bon10   [3]     94.0   82    35     2      2.60
pri_bon30   [3]     108.6  74    36     2      3.00
pri_bon50   [3]     106.2  75    36     2      2.93
The opposite direction to io_load: as prio_bonus_ratio goes up, time goes up,
yet mem_load achieves no more work.
Changing this tunable seems to shift the balance in either direction depending
on the load. Most of the disk writing loads have shorter times as pb goes up,
but under heavy mem_load the time goes up (without an increase in the amount
of work done by the mem_load itself). The effect is quite large.
Con
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.0 (GNU/Linux)
iD8DBQE+Aj8nF6dfvkL3i1gRAuJOAKCYVUsr4tii1akA996c/XVqdCizuQCfQi+a
QtX8sg1Q1KA2VI6eY+X5GtM=
=QlX7
-----END PGP SIGNATURE-----
Thread overview: 24+ messages
2002-12-19 21:50 Con Kolivas [this message]
2002-12-19 22:46 ` [BENCHMARK] scheduler tunables with contest - prio_bonus_ratio Robert Love
2002-12-19 23:18 ` Andrew Morton
2002-12-19 23:41 ` Robert Love
2002-12-20 0:02 ` Andrew Morton
2002-12-20 0:15 ` Robert Love
2002-12-20 0:22 ` Con Kolivas
2002-12-20 0:29 ` Robert Love
2002-12-20 0:27 ` Andrew Morton
2002-12-20 2:42 ` Robert Love
2002-12-20 2:48 ` Andrew Morton
2002-12-24 22:26 ` scott thomason
2002-12-25 7:29 ` Con Kolivas
2002-12-25 16:17 ` scott thomason
2002-12-26 15:01 ` scott thomason
2003-01-01 0:31 ` Impact of scheduler tunables on interactive response (was Re: [BENCHMARK] scheduler tunables with contest - prio_bonus_ratio) scott thomason
2003-01-01 16:05 ` Bill Davidsen
2003-01-01 17:15 ` scott thomason
2002-12-19 23:42 ` [BENCHMARK] scheduler tunables with contest - prio_bonus_ratio Con Kolivas
2002-12-19 23:53 ` Robert Love
2002-12-20 0:04 ` Con Kolivas
2002-12-20 0:16 ` Robert Love
2002-12-20 11:17 ` Marc-Christian Petersen
2002-12-20 17:54 ` Robert Love