linux-raid.vger.kernel.org archive mirror
* Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
@ 2008-05-10  9:23 Justin Piszcz
  0 siblings, 0 replies; 8+ messages in thread
From: Justin Piszcz @ 2008-05-10  9:23 UTC (permalink / raw)
  To: linux-raid

I was curious: if you have, for example, 5 PCI-e x8 slots:

1. 2x sata port card
2. 2x sata port card
3. 2x sata port card
4. 2x sata port card
5. 2x sata port card

Would that be faster than:

1. 16-port 3ware (or) 16-port Areca
    drives -> JBOD

Does anyone here use SW RAID on the second configuration with 10,000 rpm 
drives, or more than ~10 drives in a RAID5?  If so, what kind of read/write 
speed do you achieve?  I am curious whether a single card with that many 
ports, even running JBOD, can really match the bandwidth you get by 
splitting the drives up over multiple PCI-e slots as shown in the first 
example.
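
For reference, the kind of aggregate streaming-read test I have in mind, 
as a minimal sketch -- device names are placeholders; run it once per 
configuration and sum the per-drive MB/s figures dd reports:

    # read 4 GB from each drive in parallel; dd prints its rate on completion
    for d in /dev/sd[b-f]; do
        dd if=$d of=/dev/null bs=1M count=4096 &
    done
    wait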

Justin.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
@ 2008-05-10 14:06 David Lethe
  2008-05-10 14:15 ` Justin Piszcz
  0 siblings, 1 reply; 8+ messages in thread
From: David Lethe @ 2008-05-10 14:06 UTC (permalink / raw)
  To: Justin Piszcz, linux-raid

You can't generalize whether HW RAID is faster or slower than SW RAID.
The I/O mix, CPU speed, bus type, RAM, file system type/config, queue depth, specific RAID card, drivers, and firmware all have a significant impact. Even with the info you supply, one can easily model a config where either RAID architecture will outperform the other.

If performance is vital for you on a certain PC config, then tune everything for HW RAID, test, rebuild for SW RAID, and compare.  Don't forget to yank power to a disk while testing both, to see how each works under stress as well as when consistency checks run.
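
A minimal sketch of the non-destructive variant of that failure test on 
the SW RAID side -- array and device names are placeholders, and this only 
exercises md's failure handling, not a true power cut, so test the power 
pull as well:

    mdadm /dev/md0 --fail /dev/sdc1     # mark one member faulty mid-benchmark
    mdadm /dev/md0 --remove /dev/sdc1   # drop it from the array
    mdadm /dev/md0 --add /dev/sdc1      # re-add it; watch the rebuild in /proc/mdstat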

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
  2008-05-10 14:06 Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)? David Lethe
@ 2008-05-10 14:15 ` Justin Piszcz
  2008-05-10 17:14   ` Jeff Garzik
  0 siblings, 1 reply; 8+ messages in thread
From: Justin Piszcz @ 2008-05-10 14:15 UTC (permalink / raw)
  To: David Lethe; +Cc: linux-raid



On Sat, 10 May 2008, David Lethe wrote:

> You can't generalize whether HW RAID is faster or slower than SW RAID.
> The I/O mix, CPU speed, bus type, RAM, file system type/config, queue
> depth, specific RAID card, drivers, and firmware all have a significant
> impact. Even with the info you supply, one can easily model a config
> where either RAID architecture will outperform the other.
>
> If performance is vital for you on a certain PC config, then tune
> everything for HW RAID, test, rebuild for SW RAID, and compare.  Don't
> forget to yank power to a disk while testing both, to see how each works
> under stress as well as when consistency checks run.

.. What I meant was: is JBOD using a single card with 16 ports faster than
JBOD using multiple PCI-e SATA cards?

Justin.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
@ 2008-05-10 14:47 David Lethe
  0 siblings, 0 replies; 8+ messages in thread
From: David Lethe @ 2008-05-10 14:47 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-raid

Even that depends on numerous variables: the read vs. write percentage, and whether you measure speed by IOPS or by throughput. Does your mobo have multiple bus controller chips? You can saturate the bus depending on what slot(s) you use, and even the Ethernet on the mobo will compete with disk I/O. Multiple cards across multiple independent PCIe buses would be best for high throughput, if that is what you need ... but it matters less as the percentage of random I/O increases. Get the specs on your mobo and read them carefully. Also, PCIe has separate paths for reads vs. writes, so if you balance both you are better off.
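
One way to inspect that, assuming pciutils is installed (output will 
vary by mobo):

    lspci -tv                    # tree view: which slots hang off which bridge
    lspci -vv | grep -i lnksta   # negotiated PCIe link speed/width per device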

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
  2008-05-10 14:15 ` Justin Piszcz
@ 2008-05-10 17:14   ` Jeff Garzik
  2008-05-10 22:28     ` Keld Jørn Simonsen
  0 siblings, 1 reply; 8+ messages in thread
From: Jeff Garzik @ 2008-05-10 17:14 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: David Lethe, linux-raid

Justin Piszcz wrote:
> .. What I meant was: is JBOD using a single card with 16 ports faster than
> JBOD using multiple PCI-e SATA cards?

JBOD on a HW RAID card really wastes its primary purpose: 
offloading RAID processing from the CPU, and consolidating large 
transactions.

Using HW RAID-1 means that, for example, _one_ copy of a 4k write to a 
RAID-1 device goes to the card, which performs data replication to each 
device.  In SW RAID's case, $N copies cross the PCI bus, one copy for 
each device in the RAID-1 mirror.

In HW RAID-5, one 4k write can go to the card, which then performs the 
parity calculation and data replication.  In SW RAID-5, the parity 
calculation occurs on the host CPU, and $N copies go across the PCI bus.
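
The kernel benchmarks the host CPU's parity routines at boot, so you can 
see what SW RAID-5 has to work with -- a quick check (message format 
varies by kernel version):

    dmesg | grep -i -e xor -e raid5 -e raid6   # md's checksumming benchmark results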

Running HW RAID in JBOD mode eliminates all the efficiencies of HW RAID 
just listed.  You might as well run normal SATA at that point, because 
you gain additional flexibility and faster performance.

But unless you are maxing out PCI bus bandwidth -- highly unlikely for 
PCI Express unless you have 16 SSDs or so -- you likely won't even 
notice SW RAID's additional PCI bus use.
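
Back of the envelope, assuming PCIe 1.x and ~80 MB/s streaming per 
2008-era SATA disk:

    1 PCIe 1.x lane  ~  250 MB/s per direction
    x8 slot          ~  8 x 250 MB/s = ~2 GB/s each way
    16 disks         ~ 16 x 80 MB/s  = ~1.3 GB/s aggregate

So even a single x8 slot has headroom for 16 spinning disks.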


And of course, there are plenty of other factors to consider.  I wrote a 
bit on this topic at http://linux.yyz.us/why-software-raid.html

	Jeff



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
@ 2008-05-10 18:24 David Lethe
  0 siblings, 0 replies; 8+ messages in thread
From: David Lethe @ 2008-05-10 18:24 UTC (permalink / raw)
  To: Jeff Garzik, Justin Piszcz; +Cc: linux-raid

There are still other factors to consider. HW RAID can usually be configured to monitor and repair bad blocks and data consistency in the background with no CPU impact (though allow for bus overhead, depending on architecture). When things go bad and the RAID is under stress, there is a world of difference between the methodologies. People rarely consider that ... until they have corruption.

HW RAID (with battery backup) will rarely corrupt on power failure or OS crash, but it is not immune. SW RAID, however, exposes you to much more risk. Read the threads relating to bugs and data losses on md rebuilds after failures. The md code just can't address certain failure scenarios that HW RAID protects against ... but it still does a good job. HW RAID is not immune by any means; some controllers have a higher risk of loss than others. Yes, the OP asked for performance diffs, but performance under stress is fair game, as is data integrity.

Think about it ... 100 percent of disks fail, eventually, so data integrity and recovery must be considered.

Neither SW nor HW RAID is best or covers all failure scenarios, but please don't make a deployment decision based on performance when everything is working fine. Testing RAID is one of the things I do, so I speak from authority here. Too many people have blind faith that any kind of parity-protected RAID protects against hardware faults. That is not the real-world behavior.
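
On the md side, those background consistency checks are driven through 
sysfs -- a minimal sketch, with md0 as a placeholder:

    echo check > /sys/block/md0/md/sync_action   # kick off a background scrub
    cat /sys/block/md0/md/mismatch_cnt           # mismatches found by the last run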



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
  2008-05-10 17:14   ` Jeff Garzik
@ 2008-05-10 22:28     ` Keld Jørn Simonsen
  2008-05-11  7:39       ` Keld Jørn Simonsen
  0 siblings, 1 reply; 8+ messages in thread
From: Keld Jørn Simonsen @ 2008-05-10 22:28 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: Justin Piszcz, David Lethe, linux-raid

On Sat, May 10, 2008 at 01:14:02PM -0400, Jeff Garzik wrote:
> 
> And of course, there are plenty of other factors to consider.  I wrote a 
> bit on this topic at http://linux.yyz.us/why-software-raid.html

So you are the one who wrote this page! I think it is one of the more
useful pages on RAID (many others are quite outdated). 

I have a suggestion for an advantage of SW RAID that you could
consider for your page:

- potential for increased performance, due to more intelligent layouts:
  for example, the Linux raid10,f2 layout has more than double the
  performance for both sequential reads and random reads compared to
  most hardware RAID1 and to Linux SW RAID1 (a creation sketch follows
  below).

- potential for better error handling, due to more intelligent drivers:
  Linux raid10 has better error handling than Linux RAID1 and possibly
  also than HW RAID1.
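
A minimal creation sketch for the far-copies layout (device names are
placeholders):

    mdadm --create /dev/md0 --level=10 --layout=f2 \
          --raid-devices=2 /dev/sda1 /dev/sdb1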

And then, if you could link to our HOWTO pages at
http://linux-raid.osdl.org/, that would improve the visibility of our
pages. It seems that not many sites reference them, so if people on
this list would link to http://linux-raid.osdl.org/, that would
increase the chance of our information being seen.

Best regards
keld

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)?
  2008-05-10 22:28     ` Keld Jørn Simonsen
@ 2008-05-11  7:39       ` Keld Jørn Simonsen
  0 siblings, 0 replies; 8+ messages in thread
From: Keld Jørn Simonsen @ 2008-05-11  7:39 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: Justin Piszcz, David Lethe, linux-raid

On Sun, May 11, 2008 at 12:28:17AM +0200, Keld Jørn Simonsen wrote:
> I have a suggestion for an advantage of SW RAID that you could
> consider for your page:
> [...]

Another advantage of SW RAID:

- SW RAID is more flexible. On a set of drives, SW RAID lets you mix
  different RAID types: for example, the /boot partition could be RAID1
  so that it can be booted by grub/lilo, the / partition could be
  raid10,f2 for greater read performance, and the data partition could
  be RAID5 to get more effective space out of your drives. With HW RAID
  you can only allocate whole disks to one RAID type. (A sketch of such
  a setup follows below.)
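
A sketch of such a mixed setup -- partition and device names are
placeholders:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 /dev/sdb1                            # /boot: RAID1
    mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=2 \
          /dev/sda2 /dev/sdb2                            # /: raid10,f2
    mdadm --create /dev/md2 --level=5 --raid-devices=3 \
          /dev/sda3 /dev/sdb3 /dev/sdc3                  # data: RAID5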

best regards
Keld

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2008-05-11  7:39 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
2008-05-10 14:06 Has anyone compared SWRAID for (JBOD@HW-RAID) vs. (Regular Sata PCI-e cards)? David Lethe
2008-05-10 14:15 ` Justin Piszcz
2008-05-10 17:14   ` Jeff Garzik
2008-05-10 22:28     ` Keld Jørn Simonsen
2008-05-11  7:39       ` Keld Jørn Simonsen
  -- strict thread matches above, loose matches on Subject: below --
2008-05-10 18:24 David Lethe
2008-05-10 14:47 David Lethe
2008-05-10  9:23 Justin Piszcz
