* I'm Astounded by How Good Linux Software RAID IS!
@ 2003-11-22 17:07 AndyLiebman
2003-11-24 10:21 ` Hermann Himmelbauer
0 siblings, 1 reply; 4+ messages in thread
From: AndyLiebman @ 2003-11-22 17:07 UTC (permalink / raw)
To: linux-raid
I want to congratulate a lot of Linux Software Raid folks. Really. I just set
up a RAID 5 array on my Linux machine (P4-3.06 GHz -- Mandrake 9.2) using 5
External Firewire Drives.
The performance is SO GOOD that I am able to write uncompressed 8-bit video
files to my array through a Copper Gigabit network! That's a sustained 18
MB/sec -- going for 20 minutes straight.
Under Windows 2000 using Veritas Volume Manager's RAID 5, I was barely able to
sustain 4 MB/sec writing to the exact same drives -- even when I was writing
directly from the Win 2000 server and not going through my network (Reading,
however, was around 20 MB/sec). With Linux, I'm getting about 20 MB/sec through
my TCP/IP network! And the Linux box isn't even struggling. Wow.
I AM surprised by one thing, though. This is the second firewire array that I
have put on my Linux machine.
The first one was set up with 6 Firewire Drives that are bigger (200 GB
versus 120 GB) and that have larger onboard cache (8 MB versus 2 MB). I set up
those 6 drives as a RAID 10 array -- 3 mirrored pairs with a RAID 0 stripe on top
of that. The performance I was able to achieve with the RAID 10 array was
actually NO BETTER than what I am getting with RAID 5. Does that make sense?
Should I really be getting equal (or even better) performance from RAID 5
compared to RAID 10? Shouldn't RAID 10 be faster for both reading and writing?
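In case anyone wants to try the same layering, it can be built with mdadm along
these lines (only a rough sketch -- the device names below are placeholders, not
my actual ones; the FireWire drives show up as SCSI disks):
    # three RAID 1 pairs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
    # RAID 0 stripe over the three mirrors, 128K chunks
    mdadm --create /dev/md3 --level=0 --chunk=128 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2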
In both cases, I am using a 128 KB chunk size and the xfs filesystem with the
maximum allowable Linux block size (4096 bytes) and an INTERNAL log.
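For completeness, a RAID 5 array with those settings can be created along these
lines (again only a sketch, with made-up device names; the internal log is the
mkfs.xfs default):
    mdadm --create /dev/md0 --level=5 --chunk=128 --raid-devices=5 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    mkfs.xfs -b size=4096 /dev/md0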
I would love to hear your comments.
Andy Liebman
* Re: I'm Astounded by How Good Linux Software RAID IS!
2003-11-22 17:07 I'm Astounded by How Good Linux Software RAID IS! AndyLiebman
@ 2003-11-24 10:21 ` Hermann Himmelbauer
0 siblings, 0 replies; 4+ messages in thread
From: Hermann Himmelbauer @ 2003-11-24 10:21 UTC (permalink / raw)
To: AndyLiebman, linux-raid
On Saturday 22 November 2003 18:07, AndyLiebman@aol.com wrote:
> I want to congratulate a lot of Linux Software Raid folks. Really. I just
> set up a RAID 5 array on my Linux Machine (P4-3.06 Ghz -- Mandrake 9.2)
> using 5 External Firewire Drives.
>
> The performance is SO GOOD that I am able to write uncompressed 8-bit video
> files to my array through a Copper Gigabit network! That's a sustained 18
> MB/sec -- going for 20 minutes straight.
>
> The first one was set up with 6 Firewire Drives that are bigger (200 GB
> versus 120 GB) and that have larger onboard cache (8 MB versus 2 MB). I set
> up those 6 drives as a RAID 10 array -- 3 mirrored pairs with a RAID 0
> stripe on top of that. The performance I was able to achieve with the RAID
> 10 array was actually NO BETTER than what I am getting with RAID 5. Does
> that make sense?
Hmmm, here are some of my thoughts, maybe some of my assumptions are wrong as
I am no RAID expert. If so, please correct me!
- Maximum FW speed = 400 Mbit/s; with protocol overhead that is roughly 35 MB/s
- Theoretical PCI-Bandwidth: 133 MB/s
O.k., let's calculate a little bit, but only for large file writes; reads
are probably not that easy to calculate:
1) RAID5: When writing a block, the actual data written is data*(5/4), but the
data is spread over all 5 disks, so in theory it should perform like a
4-disk RAID0. In practice there is probably some performance degradation.
2) RAID10: When writing a block, the actual data written is data*2, as every
data chunk is mirrored, so it should perform like a 3-disk RAID0.
So, in theory the RAID5 should be faster, but it has worse data reliability,
which could be improved with a hot spare.
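To put rough numbers on that (only a back-of-the-envelope figure for
full-stripe writes, ignoring caching, seeks and read-modify-write):
    Writing 100 MB of file data:
      RAID5  (5 disks): 100 MB * 5/4 = 125 MB on disk -> ~25 MB per disk
      RAID10 (6 disks): 100 MB * 2   = 200 MB on disk -> ~33 MB per disk
So each disk in the RAID5 has less to do, which may explain why it keeps up
with the RAID10.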
Anyway, due to the FireWire limitation you will never get more than ~35 MB/s
per channel. Moreover, keep in mind that the transfer speed between the drive's
interface (cache) and the CPU can also never exceed this limit, which degrades
your performance, probably especially the read performance.
When it comes to the PCI bus, the load is higher with the RAID-10 solution, as
the data to be written is doubled. But the PCI bus does not seem to be a
bottleneck in this system.
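Very roughly, for your ~18 MB/s of incoming video (and ignoring that a real
PCI bus never reaches its theoretical rate):
    Network -> memory (GbE NIC on PCI):  ~18 MB/s
    Memory -> FireWire cards (RAID10):   ~18 MB/s * 2   = ~36 MB/s
    Memory -> FireWire cards (RAID5):    ~18 MB/s * 5/4 = ~23 MB/s
    Total bus traffic: ~54 MB/s (RAID10) or ~41 MB/s (RAID5), well below 133 MB/s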
Another thought on Gigabit Ethernet: 32-bit PCI Ethernet NICs are known to be
quite slow; often they deliver not much more than 20-30 MB/s. Moreover, if you
(mis)use your 32-bit PCI bus for Gigabit Ethernet you will probably degrade
your RAID performance as the PCI bus gets saturated. For Gigabit Ethernet you
are better off with the Intel CSA solution found in the 875 chipset, or a
motherboard with a 64-bit PCI or PCI-X bus (which is expensive). *Maybe*
Nvidia also has a CSA-equivalent solution in its nForce3 chipset, but I
could not find any specs on this.
Moreover, I would also check the CPU load, which can also degrade performance,
as RAID5 needs CPU power and Gigabit Ethernet (protocol handling etc.) can
also use a lot of CPU.
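Standard tools are enough to watch this while a transfer is running, for
example:
    vmstat 1           # us/sy/id columns show user, system and idle CPU time
    cat /proc/mdstat   # array state and any resync activity
If I remember correctly, the kernel also measures the RAID5 checksumming speed
when the raid5 module loads; dmesg | grep -i raid5 should show it.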
Best Regards,
Hermann
--
x1@aon.at
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9 4902 64B4 D16B 2998 93C7
* Re: I'm Astounded by How Good Linux Software RAID IS!
@ 2003-11-24 13:47 AndyLiebman
2003-11-24 23:45 ` Hermann Himmelbauer
0 siblings, 1 reply; 4+ messages in thread
From: AndyLiebman @ 2003-11-24 13:47 UTC (permalink / raw)
To: dusty, linux-raid
Thanks for your comments.
>> I want to congratulate a lot of Linux Software Raid folks. Really. I just
>> set up a RAID 5 array on my Linux Machine (P4-3.06 Ghz -- Mandrake 9.2)
>> using 5 External Firewire Drives.
>>
>> The performance is SO GOOD that I am able to write uncompressed 8-bit video
>> files to my array through a Copper Gigabit network! That's a sustained 18
>> MB/sec -- going for 20 minutes straight.
>>
>> The first one was set up with 6 Firewire Drives that are bigger (200 GB
>> versus 120 GB) and that have larger onboard cache (8 MB versus 2 MB). I set
>> up those 6 drives as a RAID 10 array -- 3 mirrored pairs with a RAID 0
>> stripe on top of that. The performance I was able to achieve with the RAID
>> 10 array was actually NO BETTER than what I am getting with RAID 5. Does
>> that make sense?
> Hmmm, here are some of my thoughts, maybe some of my assumptions are wrong as
> I am no RAID expert. If so, please correct me!
> - Maximum FW speed = 400 Mbit/s; with protocol overhead that is roughly 35 MB/s
> - Theoretical PCI-Bandwidth: 133 MB/s
Do you think this 35 MB/sec limit applies even though I am using 5 separate
FireWire PCI cards? That's 5 separate channels, each of which has 400 Mbit/sec
of bandwidth. Keep in mind that EACH drive in the RAID 5 is on its own separate
PCI card.
> O.k., let's calculate a little bit, but only for large file writes; reads
> are probably not that easy to calculate:
> 1) RAID5: When writing a block, the actual data written is data*(5/4), but the
> data is spread over all 5 disks, so in theory it should perform like a
> 4-disk RAID0. In practice there is probably some performance degradation.
> 2) RAID10: When writing a block, the actual data written is data*2, as every
> data chunk is mirrored, so it should perform like a 3-disk RAID0.
> So, in theory the RAID5 should be faster, but it has worse data reliability,
> which could be improved with a hot spare.
> Anyway, due to the FireWire limitation you will never get more than ~35 MB/s
> per channel. Moreover, keep in mind that the transfer speed between the
> drive's interface (cache) and the CPU can also never exceed this limit, which
> degrades your performance, probably especially the read performance.
> When it comes to the PCI bus, the load is higher with the RAID-10 solution,
> as the data to be written is doubled. But the PCI bus does not seem to be a
> bottleneck in this system.
> Another thought on Gigabit Ethernet: 32-bit PCI Ethernet NICs are known to be
> quite slow; often they deliver not much more than 20-30 MB/s. Moreover, if
> you (mis)use your 32-bit PCI bus for Gigabit Ethernet you will probably
> degrade your RAID performance as the PCI bus gets saturated. For Gigabit
> Ethernet you are better off with the Intel CSA solution found in the 875
> chipset, or a motherboard with a 64-bit PCI or PCI-X bus (which is
> expensive). *Maybe* Nvidia also has a CSA-equivalent solution in its nForce3
> chipset, but I could not find any specs on this.
I understand that a 64-bit PCI bus and card would be better. But there's no
point at the moment unless I increase the read and write speed of the RAID
array.
There are other variables too. When I tested the RAID 10 array, I got much
better performance with the xfs filesystem than with ext3. With xfs, it didn't
seem to make any difference whether I had an external or internal log file.
However, I have read that with RAID 5 an external log is a big advantage. I'm
going to try that.
But I also read in SGI's xfs documentation that external logs don't work
unless your volumes are created using some kind of logical volume manager --
and not with partitions. But maybe that rule only applies to xfs on UNIX and
not to xfs on LINUX. There is no mention of LINUX in the SGI documentation.
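If I do try it, my understanding is that the external log has to be given both
at mkfs time and at mount time, something like this (untested on my part, and
the device names are made up):
    mkfs.xfs -b size=4096 -l logdev=/dev/sdf1,size=32m /dev/md0
    mount -t xfs -o logdev=/dev/sdf1 /dev/md0 /mnt/raid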
> Moreover, I would also check the CPU load, which can also degrade
> performance, as RAID5 needs CPU power and Gigabit Ethernet (protocol
> handling etc.) can also use a lot of CPU.
CPU load is all over the map. I am using hyperthreading support (I have a
P4-3.06 GHz) -- but the load on CPU 1 ranges up and down from 20 to 100
percent, mostly staying around 40 to 60 percent. CPU 2 sits at 2 to 5 percent.
I have tried disabling hyperthreading in the BIOS and I get the same
performance. Hyperthreading just seems to let me do other things at the same
time.
* Re: I'm Astounded by How Good Linux Software RAID IS!
2003-11-24 13:47 AndyLiebman
@ 2003-11-24 23:45 ` Hermann Himmelbauer
0 siblings, 0 replies; 4+ messages in thread
From: Hermann Himmelbauer @ 2003-11-24 23:45 UTC (permalink / raw)
To: AndyLiebman, linux-raid
On Monday 24 November 2003 14:47, AndyLiebman@aol.com wrote:
>> Hmmm, here are some of my thoughts, maybe some of my assumptions are wrong
>> as I am no RAID expert. If so, please correct me!
>>
>> - Maximum FW speed = 400 Mbit/s; with protocol overhead that is roughly 35 MB/s
>> - Theoretical PCI-Bandwidth: 133 MB/s
>
>
> Do you think this 35 MB/sec applies, even though I am using 5 separate
> Firewire PCI cards? That's 5 separate channels, each of which has a 400
> Mbit/sec speed. Keep in mind that EACH drive on the RAID 5 is on it's own
> separate PCI card.
I already assumed that you use one drive on each channel. Nevertheless, the
maximum throughput from a drive to the CPU is limited by the 400 Mbit/s
FireWire speed, which is around 35 MB/s.
At first glance this is no limitation, as normal IDE drives don't deliver much
more than that, but the interface speed can be much higher -- and that's my
point: transfers from the drive's internal cache to the CPU are also limited
by the FireWire speed.
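As a rough sanity check of the numbers (all approximate, overhead estimated):
    400 Mbit/s / 8 = 50 MB/s raw per FireWire channel, maybe ~35 MB/s usable
    5 channels * ~35 MB/s = ~175 MB/s in theory, but all cards share one
        32-bit/33 MHz PCI bus (~133 MB/s theoretical)
    a typical current IDE disk sustains maybe 30-50 MB/s, so each single drive
        already sits close to its FireWire ceiling
You can check the per-drive ceiling yourself with hdparm; -t and -T only time
reads, so they should work even though the FireWire drives appear as SCSI
devices:
    hdparm -t /dev/sda   # sustained reads from the disk, through the FireWire bridge
    hdparm -T /dev/sda   # reads from Linux's buffer cache, basically a memory-speed baseline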
Best Regards,
Hermann
--
x1@aon.at
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9 4902 64B4 D16B 2998 93C7