From: Andreas Peter <ujq7@rz.uni-karlsruhe.de>
To: linux-kernel@vger.kernel.org
Subject: SW-RAID0 Performance problems
Date: Fri, 13 Apr 2001 13:47:30 +0200 [thread overview]
Message-ID: <01041313473002.00533@debian> (raw)
Hi,
I've successfully set up SW-RAID0 with Kernel 2.4.3 and Raidtools 0.9.
I did this to increase the performance of my disks, but throughput did not improve.
The hdparm results:
hdparm -t /dev/md0 : 20.25 MB/sec
hdparm -t /dev/hda : 20.51 MB/sec
hdparm -t /dev/hdc : 20.71 MB/sec
I thought the performance of RAID0 should be near 40 MB/sec.
I played with different chunk sizes, but the result was always the same.
The drives are both Maxtor DiamondMax VL40, 30GB, DMA on.
No other drive is attached to either bus.
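For reference, the raidtab is essentially the stock two-disk RAID0 example (a sketch, assuming one whole-disk partition per drive; the chunk-size line is what I varied between runs):

```
raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            32

    device                /dev/hda1
    raid-disk             0
    device                /dev/hdc1
    raid-disk             1
```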
Here are also some bonnie++ results:
-- RAID-0 --
-- chunk-size=16 --
Version 1.01 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
debian 1G 7416 99 14277 20 7498 10 6942 90 27007 20 113.0 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 267 99 +++++ +++ 10968 100 269 99 +++++ +++ 1388 99
-- chunk-size=32 --
Version 1.01 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
debian 1G 7396 99 14075 20 7469 10 6945 90 26960 20 133.7 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 265 100 +++++ +++ 10695 99 267 99 +++++ +++ 1447 100
-- Single HD /dev/hdc1 --
Version 1.01 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
debian 1G 7173 96 11055 13 5038 6 5999 78 29146 21 90.7 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 272 100 +++++ +++ 10482 99 274 99 +++++ +++ 1437 100
Are there known performance problems with 2.4.3, or is it necessary to
apply patches to the kernel?
Or did I do something wrong?
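One thing I have not tried yet: aligning the ext2 filesystem to the RAID chunk size with mke2fs's stride option, which the Software-RAID HOWTO recommends. A sketch with assumed values (32 KB chunks, 4 KB blocks; stride = chunk size / block size):

```shell
# Hypothetical values matching one of my test runs.
chunk_kb=32   # RAID0 chunk size in KB (from raidtab)
block_kb=4    # ext2 block size in KB

# stride = number of filesystem blocks per chunk
stride=$((chunk_kb / block_kb))

# Print the mke2fs invocation rather than running it here.
echo "mke2fs -b $((block_kb * 1024)) -R stride=$stride /dev/md0"
# prints: mke2fs -b 4096 -R stride=8 /dev/md0
```

Could a misaligned filesystem alone account for the drives not being read in parallel, though?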
Thank you for every hint!
Andreas
--
Andreas Peter *** ujq7@rz.uni-karlsruhe.de
Thread overview: 12+ messages
2001-04-13 11:47 Andreas Peter [this message]
2001-04-13 16:01 ` SW-RAID0 Performance problems Jakob Østergaard
2001-04-13 16:28 ` Andreas Peter
[not found] <Pine.LNX.4.10.10104131048550.1669-100000@coffee.psychology.mcmaster.ca>
2001-04-13 15:36 ` Andreas Peter
2001-04-13 16:07 ` David Rees
2001-04-13 16:28 ` Andreas Peter
2001-04-14 7:04 ` David Rees
2001-04-14 9:38 ` Andreas Peter
2001-04-14 12:28 ` Kurt Roeckx
2001-04-14 13:09 ` Andreas Peter
2001-04-13 18:11 ` Tim Moore
2001-04-14 10:45 ` Andreas Peter