From: "Keld Jørn Simonsen" <keld@dkuug.dk>
To: Gregory Leblanc <gleblanc@linuxweasel.com>
Cc: "Keld Jørn Simonsen" <keld@dkuug.dk>, linux-raid@vger.kernel.org
Subject: Re: strange performance of raid0
Date: Wed, 18 Feb 2004 00:26:41 +0100
Message-ID: <20040217232640.GD12078@rap.rap.dk>
In-Reply-To: <4032545D.4070105@linuxweasel.com>
On Tue, Feb 17, 2004 at 09:50:21AM -0800, Gregory Leblanc wrote:
> Keld Jørn Simonsen wrote:
>
> >Hi
> >
> >I have some strange performance results on a raid0
> >I have 4 IDE disks on two controllers, the one on the
> >motherboard of the duron 1 GHz machine, the other a promise TX2 plus
> >SATA + PATA controller. I run kernel 2.4.22
> >
> >The disks and hdparm -t on each of them
> >
> >/dev/hdc1 seagate 80 GB 16 MB/s
> >/dev/sda1 maxtor sata 200 GB 50 MB/s
> >/dev/sdb7 maxtor 160 GB 54 MB/s
>
> Well, if you're lucky, these might be useful as comparative numbers
> between the different drives on the same system. Just as likely not,
> though; hdparm rather sucks as a benchmark.
Yes, I only use them as crude benchmark measures, but they seem
indicative.
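Concretely that is just hdparm's buffered-read test run against each
partition, i.e. something like:

  hdparm -t /dev/hdc1
  hdparm -t /dev/sda1
  hdparm -t /dev/sdb7

It only measures raw sequential reads from one drive at a time, so it
says nothing about how the drives behave together in an array.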
> >The partitions are all about 5 GB each.
> >
> >If I make a raid0 device of all of them I get a throughput of 45 MB/s.
> >If I exclude the hdc1 partition, I get around 75 MB/s.
> >The system is a little loaded - but that would be normal operating
> >conditions. CPU is 90 % idle. I have about 100 MB free RAM.
>
> Let's assume that the above numbers have a basis in reality. :) If
> you've got disks with widely varying speeds, then the best performance
> can often be had from setting up a linear RAID volume, rather than a
> RAID0. RAID 0 is really designed to have matching disks, as it
> distributes data evenly across them. With Linear, and ext2 (erm, I'm
> assuming ext3 as well; I haven't heard anything different), you can
> sometimes get better performance with smaller writes, because ext2
> "scatters" data around the filesystem, in order to avoid fragmentation.
Yes, that would be an idea. Anyway, I replaced hdc1 with hda1 on
a seagate 40 GB disk, which hdparm said could do about 40 MB/s,
and then, when I was lucky, I could actually get about 120 MB/s
throughput on a "cat file >/dev/null": 681 MB in 5.63 secs.
This is on a mildly loaded production system. Not bad!
Then I am beginning to hit the 1 Gbit/s limit on the PCI bus.
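To spell the numbers out: 681 MB / 5.63 s is about 121 MB/s, i.e. roughly
0.97 Gbit/s, and plain 32-bit/33 MHz PCI tops out around 133 MB/s, so
there really is not much headroom left on the bus.

For completeness, the array and the test would be set up roughly like
this (shown with mdadm; the chunk size and mount point are only
illustrative, and a raidtools raidtab does the same job on 2.4):

  mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=3 \
        /dev/hda1 /dev/sda1 /dev/sdb7
  mke2fs /dev/md0
  mount /dev/md0 /mnt/md0
  # with a large file already on the filesystem:
  time cat /mnt/md0/file > /dev/null

(A linear array, as you suggest, is the same mdadm command with
--level=linear.)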
Best regards
Keld
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 10+ messages
2004-02-15 10:04 strange performance of raid0 Keld Jørn Simonsen
2004-02-15 9:54 ` Matt Thrailkill
2004-02-15 12:05 ` Keld Jørn Simonsen
2004-02-15 11:55 ` Matt Thrailkill
2004-02-15 16:48 ` Keld Jørn Simonsen
2004-02-15 20:35 ` Mark Hahn
2004-02-15 20:49 ` Keld Jørn Simonsen
2004-02-15 23:58 ` Keld Jørn Simonsen
2004-02-17 17:50 ` Gregory Leblanc
2004-02-17 23:26 ` Keld Jørn Simonsen [this message]
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the mbox file for this message, import it into your mail client,
  and reply-to-all from there.
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20040217232640.GD12078@rap.rap.dk \
--to=keld@dkuug.dk \
--cc=gleblanc@linuxweasel.com \
--cc=linux-raid@vger.kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
Be sure your reply has a Subject: header at the top and a blank line
before the message body.