* IDE Performance issues
@ 2003-09-01 21:12 Theepan
0 siblings, 0 replies; 4+ messages in thread
From: Theepan @ 2003-09-01 21:12 UTC (permalink / raw)
To: linux-raid, linux-ide
Hi there,
I have a RAID5 setup with 4 x 120GB IDE disks, and I have always wondered why
the performance of my RAID set never exceeds 60-66MB/s when a lot of people
talk about their RAID setups sustaining 80-100MB/s (and then some).
Until recently I simply blamed the overhead that software RAID brings,
namely that it is software, takes up CPU cycles, and I don't have the
fastest processors in the world. After a lot of tests, I found out that
RAID5 is not to blame; the IDE/PCI layer, on the other hand, is.
zcav, a tool from the bonnie++ package, was used to measure the sequential read
rate. All disks were able to sustain a minimum of 30MB/s when accessed
separately. When accessed simultaneously I get these results:
When accessing 1 disk, the read rate was 30MB/s (as expected).
When accessing 2 disks, the read rate was 60MB/s (30MB/s on each disk, as
expected).
When accessing 3 disks, the read rate was 60MB/s (20MB/s on each disk,
dropping 10MB/s).
When accessing 4 disks, the read rate was 60MB/s (15MB/s on each disk,
dropping 15MB/s).
When accessing md1 (RAID5 with 4 disks), the read rate was 55-63MB/s.
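These numbers fit a simple shared-ceiling model: the aggregate rate is capped
at roughly 60MB/s and divided evenly among the active disks. A minimal sketch
of that interpretation (only the 30MB/s per-disk rate and the 60MB/s cap come
from the measurements above; the model itself is my reading of them):

```python
def observed_rate(disks, per_disk=30.0, bus_cap=60.0):
    """Aggregate and per-disk throughput (MB/s) under a shared bus cap."""
    aggregate = min(disks * per_disk, bus_cap)
    return aggregate, aggregate / disks

for n in range(1, 5):
    total, each = observed_rate(n)
    print(f"{n} disk(s): {total:.0f}MB/s total, {each:.0f}MB/s each")
```

which reproduces the 30/60/60/60 totals and the 30/30/20/15 per-disk rates
measured above.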
A quick look at the results shows that the maximum performance achieved was
60MB/s no matter how many disks (2+) were involved - the same performance
you could expect from an ATA66 system. On a 32-bit system where the PCI
bus operates at 33MHz, one would expect the maximum theoretical throughput
to be 133MB/s (or about 127MB/s in binary units).
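For reference, the 133MB/s figure is simply the bus width times the clock
rate, and the 127 figure is the same quantity expressed in binary (MiB/s)
units:

```python
clock_hz = 33.33e6        # nominal 33MHz PCI clock
width_bytes = 4           # 32-bit bus
peak_mb = clock_hz * width_bytes / 1e6      # decimal MB/s
peak_mib = clock_hz * width_bytes / 2**20   # binary MiB/s
print(f"theoretical PCI peak: {peak_mb:.0f}MB/s ({peak_mib:.1f}MiB/s)")
```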
I have 2 x Promise TX2 ATA133 controllers and 4 Maxtor 120GB disks (3 x
5400RPM and 1 x 7200RPM; all disks are ATA133 capable and operate in this
mode according to hdparm). DMA and 32-bit I/O are enabled on the disks using
hdparm as well. The disks are attached as master only, that is, one disk per
IDE channel.
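For completeness, the per-disk settings mentioned above can be set with
hdparm; a sketch (the device names /dev/hde through /dev/hdk are my
assumption for four master disks on two add-on cards, and the commands are
only printed here rather than run):

```shell
# Dry-run: print the hdparm invocations that enable DMA (-d1) and
# 32-bit I/O (-c1) on each disk. Device names are assumed.
for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
  echo "hdparm -d1 -c1 $d"
done
```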
The system is a dual P3-500MHz with 256MB RAM.
The motherboard is a SOYO SY-D6IBA with the 440BX chipset.
I'm running Debian Linux with kernel 2.4.22 (a vanilla kernel with only the
lm_sensors and i2c patches).
The RAID5 consists of 4 disks with a 128KB chunk size.
The onboard SCSI is disabled as I have no use for it.
LILO has "ide2=ata66 ide3=ata66 ide4=ata66 ide5=ata66" appended, because
I've read that this instructs the kernel to use ATA66+ transfer modes,
thereby utilizing ATA66 (and higher) speeds.
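For context, the append line sits in /etc/lilo.conf roughly like this (the
image name and label are placeholders; only the append= line comes from my
setup):

```
image=/vmlinuz
        label=Linux
        read-only
        append="ide2=ata66 ide3=ata66 ide4=ata66 ide5=ata66"
```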
Does anyone have any idea what this is about (maybe from first-hand
experience) and why? And preferably, how would I go about "fixing" it?
If there's need for output from lspci, dmesg, etc., let me know - I have it
ready. I do not know whether these lists prefer output attached or included
in the body of the email, which is why I didn't include any.
--
Theepan
* IDE Performance issues
@ 2003-09-02 5:59 Theepan
2003-09-02 12:19 ` Bartlomiej Zolnierkiewicz
0 siblings, 1 reply; 4+ messages in thread
From: Theepan @ 2003-09-02 5:59 UTC (permalink / raw)
To: linux-ide
Hi there,
(I tried to send this mail to both linux-ide@ and linux-raid@ lists, but it
only arrived at linux-raid@ - I guess one cannot specify multiple targets in
the To: field, which is why I'm resending to this list only. I've also
edited this email to exclude RAID details.)
I am experiencing performance issues with the IDE/PCI subsystem that seem to
limit "disk performance" to roughly ATA66 levels even though both the disks
and the ATA controllers are ATA133 capable.
zcav, a tool from the bonnie++ package, was used to measure the sequential read
rate. All disks were able to sustain a minimum of 30MB/s when accessed
separately. When accessed simultaneously I get these results:
When accessing 1 disk, the read rate was 30MB/s (as expected).
When accessing 2 disks, the read rate was 60MB/s (30MB/s on each disk, as
expected).
When accessing 3 disks, the read rate was 60MB/s (20MB/s on each disk,
dropping 10MB/s).
When accessing 4 disks, the read rate was 60MB/s (15MB/s on each disk,
dropping 15MB/s).
A quick look at the results shows that the maximum performance achieved was
60MB/s no matter how many disks (2+) were involved - the same performance
you could expect from an ATA66 system. On a 32-bit system where the
PCI bus operates at 33MHz, one would expect the maximum theoretical
throughput to be 133MB/s (or about 127MB/s in binary units).
I have 2 x Promise TX2 ATA133 (PDC20269) controllers and 4 Maxtor 120GB
disks (3 x 5400RPM and 1 x 7200RPM; all disks are ATA133 capable and
operate in this mode according to hdparm). DMA and 32-bit I/O are enabled on
the disks using hdparm as well. The disks are attached as master only, that
is, one disk per IDE channel.
The system is a dual P3-500MHz with 256MB RAM.
The motherboard is a SOYO SY-D6IBA with the 440BX chipset.
I'm running Debian Linux with kernel 2.4.22 (a vanilla kernel with only the
lm_sensors and i2c patches).
The onboard SCSI is disabled as I have no use for it.
LILO has "ide2=ata66 ide3=ata66 ide4=ata66 ide5=ata66" appended, because
I've read that this instructs the kernel to use ATA66+ transfer modes,
thereby utilizing ATA66 (and higher) speeds.
Does anyone have any idea what this is about (maybe from first-hand
experience) and why? And preferably, how would I go about "fixing" it?
If there's need for output from lspci, dmesg, etc., let me know - I have it
ready. I do not know whether these lists prefer output attached or included
in the body of the email, which is why I didn't include any.
--
Theepan
* Re: IDE Performance issues
2003-09-02 5:59 Theepan
@ 2003-09-02 12:19 ` Bartlomiej Zolnierkiewicz
2003-09-02 13:48 ` Theepan
0 siblings, 1 reply; 4+ messages in thread
From: Bartlomiej Zolnierkiewicz @ 2003-09-02 12:19 UTC (permalink / raw)
To: Theepan; +Cc: linux-ide
On Tuesday 02 of September 2003 07:59, Theepan wrote:
> Hi there,
>
> (I tried to send this mail to both linux-ide@ and linux-raid@ lists, but it
> only arrived at linux-raid@ - I guess one cannot specify multiple targets
> in the To: field, which is why I'm resending to this list only. I've also
> edited this email to exclude RAID details.)
I think I've seen this mail on linux-ide@ previously.
> I am experiencing performance issues with the IDE/PCI subsystem that seems
> to limit the "disk performance" to somewhat ATA66 mode when both disks and
> ATA controllers are ATA133 capable.
>
> zcav, a tool from the bonnie++ package, was used to measure the sequential read
> rate. All disks were able to sustain a minimum of 30MB/s when accessed
> separately. When accessed simultaneously I get these results:
>
> When accessing 1 disk, the read rate was 30MB/s (as expected).
> When accessing 2 disks, the read rate was 60MB/s (30MB/s on each disk, as
> expected)
> When accessing 3 disks, the read rate was 60MB/s (20MB/s on each disk,
> dropping 10MB/s).
> When accessing 4 disks, the read rate was 60MB/s (15MB/s on each disk,
> dropping 15MB/s).
Have you tried with RAID0?
What makes you think it's an IDE issue, not a RAID one?
> A quick look at the results shows that the maximum performance achieved was
> 60MB/s no matter how many disks (2+) were involved - the same performance
> you could expect from an ATA66 system. On a 32-bit system where the
> PCI bus operates at 33MHz, one would expect the maximum theoretical
> throughput to be 133MB/s (or about 127MB/s in binary units).
>
> I have 2 x Promise TX2 ATA133 (PDC20269) controllers and 4 Maxtor 120GB
> disks (3 x 5400RPM and 1 x 7200RPM; all disks are ATA133 capable and
> operate in this mode according to hdparm). DMA and 32-bit I/O are enabled on
> the disks using hdparm as well. The disks are attached as master only, that
> is, one disk per IDE channel.
>
> The system is a dual P3-500MHz with 256MB RAM.
> The motherboard is a SOYO SY-D6IBA with the 440BX chipset.
> I'm running Debian Linux with kernel 2.4.22 (a vanilla kernel with only the
> lm_sensors and i2c patches).
> The onboard SCSI is disabled as I have no use for it.
>
>
> LILO has "ide2=ata66 ide3=ata66 ide4=ata66 ide5=ata66" appended, because
> I've read that this instructs the kernel to use ATA66+ transfer modes,
> thereby utilizing ATA66 (and higher) speeds.
>
> Does anyone have any idea what this is about (maybe from first-hand
> experience) and why? And preferably, how would I go about "fixing" it?
>
>
> If there's need for output from lspci, dmesg, etc., let me know - I have it
> ready. I do not know whether these lists prefer output attached or included
> in the body of the email, which is why I didn't include any.
>
>
> --
> Theepan
* Re: IDE Performance issues
2003-09-02 12:19 ` Bartlomiej Zolnierkiewicz
@ 2003-09-02 13:48 ` Theepan
0 siblings, 0 replies; 4+ messages in thread
From: Theepan @ 2003-09-02 13:48 UTC (permalink / raw)
To: Bartlomiej Zolnierkiewicz; +Cc: linux-ide
From: "Bartlomiej Zolnierkiewicz" <B.Zolnierkiewicz@elka.pw.edu.pl>
To: "Theepan" <tornado@linuxfromscratch.org>
Cc: <linux-ide@vger.kernel.org>
Sent: Tuesday, September 02, 2003 2:19 PM
Subject: Re: IDE Performance issues
> > (I tried to send this mail to both linux-ide@ and linux-raid@ lists, but it
> > only arrived at linux-raid@ - I guess one cannot specify multiple targets
> > in the To: field, which is why I'm resending to this list only. I've also
> > edited this email to exclude RAID details.)
>
> I think I've seen this mail on linux-ide@ previously.
The reason I thought it never arrived was simply that I didn't get the
e-mail back myself. It seems that I was unsubscribed automatically some time
ago. Anyway, I am subscribed again, and I didn't want to send another mail
to apologize for the duplicates, which would only create more traffic.
Consider this e-mail the apology. :)
> > zcav, a tool from the bonnie++ package, was used to measure the sequential
> > read rate. All disks were able to sustain a minimum of 30MB/s when accessed
> > separately. When accessed simultaneously I get these results:
> >
> > When accessing 1 disk, the read rate was 30MB/s (as expected).
> > When accessing 2 disks, the read rate was 60MB/s (30MB/s on each disk, as
> > expected).
> > When accessing 3 disks, the read rate was 60MB/s (20MB/s on each disk,
> > dropping 10MB/s).
> > When accessing 4 disks, the read rate was 60MB/s (15MB/s on each disk,
> > dropping 15MB/s).
>
> Have you tried with RAID0?
> What makes you think it's an IDE issue, not a RAID one?
It wouldn't do any good to try RAID0 if the underlying layer cannot deliver
the data any faster. The four "benchmarks" above were performed directly on
the disks rather than on the RAID setup, which eliminates any overhead RAID
may bring.
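For anyone who wants to reproduce this without zcav, the same parallel
raw-disk read can be approximated with dd; a sketch (the device names are my
assumption, and the commands are only printed here rather than executed):

```shell
# Dry-run: print parallel raw sequential reads that bypass md entirely.
# Device names are assumed; each dd reports its own transfer rate.
for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
  echo "dd if=$d of=/dev/null bs=1M count=1024 &"
done
echo "wait   # then compare the rates each dd reports"
```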
--
Theepan