From: J Sloan <joe@tmsusa.com>
To: linux kernel <linux-kernel@vger.kernel.org>
Cc: J Sloan <joe@tmsusa.com>
Subject: Re: 2.5.8 final - another data point
Date: Sun, 14 Apr 2002 22:46:43 -0700
Message-ID: <3CBA6943.4000701@tmsusa.com>
In-Reply-To: <3CB9EF57.6010609@tmsusa.com>
J Sloan wrote:
> Observations -
>
> The UP fix for the setup_per_cpu_areas compile
> issue apparently didn't make it into 2.5.8-final,
> so we had to apply the patch from 2.5.8-pre3
> to get it to compile.
>
> That said, however, everything works: all services
> are running, all devices work, and XFree86 is happy.
Stop me if you've heard this one before, but there is
one additional observation: dbench performance has
regressed significantly since 2.5.8-pre1. Throughput
is equivalent up to 8 instances, but at 16 and above
2.5.8-final takes a nosedive: at 128 instances it
delivers roughly 20% of the 2.5.8-pre1 throughput,
which is itself below 2.4.xx levels. I realize the BIO
layer has been through heavy surgery and is nowhere
near optimized, so take this as just a data point.

hdparm -t shows normal raw read performance, for
what it's worth.
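The run matrix behind the tables below can be sketched as a small shell loop. This is a hypothetical reconstruction: the client counts come from the tables, but the exact dbench arguments and working directory are assumptions, and a stub function stands in for the real dbench binary so the loop shape is clear without a test filesystem.

```shell
#!/bin/sh
# Stub standing in for the real dbench binary (assumption: an
# actual run would be `dbench <N>` in a scratch directory).
dbench() { echo "Throughput ... MB/sec ... $1 procs"; }

# One dbench run per client count, matching the tables below.
for n in 1 2 4 8 16 32 64 80 128; do
    dbench "$n"
done
```

A real run would also include the `hdparm -t` raw-read sanity check mentioned above.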
2.5.8-pre1
--------------
Throughput 151.152 MB/sec (NB=188.94 MB/sec 1511.52 MBit/sec) 1 procs
Throughput 152.177 MB/sec (NB=190.221 MB/sec 1521.77 MBit/sec) 2 procs
Throughput 151.965 MB/sec (NB=189.957 MB/sec 1519.65 MBit/sec) 4 procs
Throughput 151.068 MB/sec (NB=188.835 MB/sec 1510.68 MBit/sec) 8 procs
Throughput 43.0191 MB/sec (NB=53.7738 MB/sec 430.191 MBit/sec) 16 procs
Throughput 9.65171 MB/sec (NB=12.0646 MB/sec 96.5171 MBit/sec) 32 procs
Throughput 37.8267 MB/sec (NB=47.2833 MB/sec 378.267 MBit/sec) 64 procs
Throughput 14.0459 MB/sec (NB=17.5573 MB/sec 140.459 MBit/sec) 80 procs
Throughput 16.2971 MB/sec (NB=20.3714 MB/sec 162.971 MBit/sec) 128 procs
2.5.8-final
---------------
Throughput 152.948 MB/sec (NB=191.185 MB/sec 1529.48 MBit/sec) 1 procs
Throughput 151.597 MB/sec (NB=189.497 MB/sec 1515.97 MBit/sec) 2 procs
Throughput 150.377 MB/sec (NB=187.972 MB/sec 1503.77 MBit/sec) 4 procs
Throughput 150.159 MB/sec (NB=187.698 MB/sec 1501.59 MBit/sec) 8 procs
Throughput 7.25691 MB/sec (NB=9.07113 MB/sec 72.5691 MBit/sec) 16 procs
Throughput 6.36332 MB/sec (NB=7.95415 MB/sec 63.6332 MBit/sec) 32 procs
Throughput 5.55008 MB/sec (NB=6.9376 MB/sec 55.5008 MBit/sec) 64 procs
Throughput 5.82333 MB/sec (NB=7.27916 MB/sec 58.2333 MBit/sec) 80 procs
Throughput 3.40741 MB/sec (NB=4.25926 MB/sec 34.0741 MBit/sec) 128 procs
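The "approximately 20%" figure above is easy to check from the two 128-proc lines; a quick awk sketch, with the throughput numbers copied from the tables:

```shell
# Ratio of 2.5.8-final to 2.5.8-pre1 dbench throughput at
# 128 procs, using the figures reported in the tables above.
awk 'BEGIN {
    pre1  = 16.2971    # MB/sec, 2.5.8-pre1,  128 procs
    final =  3.40741   # MB/sec, 2.5.8-final, 128 procs
    printf "final/pre1 = %.1f%%\n", 100 * final / pre1
}'
# prints: final/pre1 = 20.9%
```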