* Hardware versus Software
@ 2004-05-19 6:49 AndyLiebman
2004-05-19 19:00 ` Ricky Beam
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: AndyLiebman @ 2004-05-19 6:49 UTC (permalink / raw)
To: linux-raid
I'm sure this has been discussed many times on the list but I was asked this
question today and I'm not sure how to respond:
Under Linux (i.e., a distribution such as Mandrake 10 -- with an up-to-date
2.6 kernel), is a "true hardware RAID-5" created through a SATA card such as
the 3ware 8506 or the new 3ware 9500 series any SAFER or MORE RELIABLE than a
software RAID created with the same card?
In my benchmark comparisons between software and hardware RAID, I get
significantly better performance using "software RAID". With eight 250 GB disks, I
get about 175 MB/sec reading and about 150 MB/sec writing when my system is
configured as software RAID. With hardware RAID the figures drop to about 125/100
MB/sec.
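For reference, a minimal sketch of the kind of streaming read/write test
that produces numbers like these; the file path, test size, and block size
are illustrative assumptions, not the benchmark actually used:

#!/usr/bin/env python
# Rough sequential-throughput test: time a large streaming write, then a
# streaming read of the same file.  Use a test size well above RAM so the
# page cache does not dominate; the write is fsync()ed before timing stops.
import os, time

TEST_FILE = "/mnt/raid/throughput.tmp"   # assumed mount point of the array
BLOCK = 1024 * 1024                      # 1 MiB per write/read call
TOTAL = 4 * 1024 * 1024 * 1024           # 4 GiB of data

def write_test():
    buf = b"\0" * BLOCK
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.time()
    for _ in range(TOTAL // BLOCK):
        os.write(fd, buf)
    os.fsync(fd)                         # make sure the data reached the disks
    os.close(fd)
    return TOTAL / (time.time() - start) / 1e6   # MB/s

def read_test():
    fd = os.open(TEST_FILE, os.O_RDONLY)
    start = time.time()
    while os.read(fd, BLOCK):
        pass
    os.close(fd)
    return TOTAL / (time.time() - start) / 1e6

if __name__ == "__main__":
    print("write: %.0f MB/s" % write_test())
    print("read:  %.0f MB/s" % read_test())
    os.unlink(TEST_FILE)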
I'm only using my machines as file servers, so they aren't busy doing any
other tasks besides sending data packets through Gigabit Ethernet and running
disk I/O.
But the question still remains, is there any other safety and reliability
advantage to using Hardware? Is the data on a Hardware RAID more likely to remain
intact in the event of a computer crash or freeze?
Or in the event of an abrupt power failure (I have a UPS on the system, but
that could fail, or the power cables could be pulled out of the computer or
somebody could accidentally shut it down). All of these power failure scenarios are
very unlikely, but they COULD occur.
Would Hardware RAID survive better than Software? I can't see why, but maybe
I'm missing something.
In my situation where the server isn't running any other software, the only
advantage I can see to hardware RAID is that rebuilding in the event of a disk
failure is a little easier for non-experts. But I'm writing a program to
automate the software RAID rebuilding process so that non-experts can do it
themselves.
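A minimal sketch of what such a rebuild helper might look like, simply
wrapping mdadm; the array and partition names below are placeholders, and
this is not the program described above:

#!/usr/bin/env python
# Replace a failed member of a software RAID array and let md rebuild onto
# the new disk.  Device names are placeholders for illustration only.
import subprocess

ARRAY = "/dev/md0"          # assumed array device
FAILED = "/dev/sdc1"        # partition reported as faulty
REPLACEMENT = "/dev/sdi1"   # freshly partitioned spare of the same size

def run(*cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

def replace_failed_disk():
    # Mark the bad member failed (if md has not already) and pull it out.
    run("mdadm", "--manage", ARRAY, "--fail", FAILED)
    run("mdadm", "--manage", ARRAY, "--remove", FAILED)
    # Swap the physical drive here, then add the replacement;
    # md starts the rebuild automatically.
    run("mdadm", "--manage", ARRAY, "--add", REPLACEMENT)

if __name__ == "__main__":
    replace_failed_disk()
    print("Rebuild started; watch /proc/mdstat for progress.")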
Your informed comments would be appreciated.
Andy Liebman
* Re: Hardware versus Software
2004-05-19 6:49 Hardware versus Software AndyLiebman
@ 2004-05-19 19:00 ` Ricky Beam
2004-05-19 20:03 ` Ming Zhang
2004-05-19 22:30 ` Ben Edwards
2004-05-20 2:00 ` dean gaudet
2 siblings, 1 reply; 11+ messages in thread
From: Ricky Beam @ 2004-05-19 19:00 UTC (permalink / raw)
To: AndyLiebman; +Cc: linux-raid
On Wed, 19 May 2004 AndyLiebman@aol.com wrote:
>But the question still remains, is there any other safety and reliability
>advantage to using Hardware? Is the data on a Hardware RAID more likely to
>remain intact in the event of a computer crash or freeze?
The Linux software RAID is good stuff. It *can* be difficult to repair
an array in a few cases -- don't put the root FS on a non-mirrored array
and you shouldn't have a problem.
As you don't care about CPU cycles used by the array, you're far better
off using software RAID with normal drive controllers (especially true
for SATA). Hardware RAID cards generally offer better manageability
and stability -- the OS doesn't have to know if a drive fails, they're
designed to hot-swap drives with little or no fuss, etc. But they will
be just as reliable as anything else.
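For what it's worth, with software RAID a failed drive is at least visible
to the OS through /proc/mdstat. A minimal sketch of a watcher that flags a
degraded md array (the polling interval is arbitrary):

#!/usr/bin/env python
# Flag degraded md arrays by parsing /proc/mdstat.  A status string such as
# "[UU_U]" means the third member of a four-disk array has dropped out.
import re, time

def degraded_arrays():
    bad = []
    current = None
    for line in open("/proc/mdstat"):
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        status = re.search(r"\[([U_]+)\]", line)
        if status and "_" in status.group(1) and current:
            bad.append((current, status.group(1)))
    return bad

if __name__ == "__main__":
    while True:
        for name, status in degraded_arrays():
            print("WARNING: %s is degraded (%s)" % (name, status))
        time.sleep(60)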
For a system I never want to have to touch, I use hardware raid. For those
systems sitting at my feet, that isn't as important.
--Ricky
* Re: Hardware versus Software
2004-05-19 19:00 ` Ricky Beam
@ 2004-05-19 20:03 ` Ming Zhang
2004-05-19 22:02 ` Ricky Beam
0 siblings, 1 reply; 11+ messages in thread
From: Ming Zhang @ 2004-05-19 20:03 UTC (permalink / raw)
To: Ricky Beam; +Cc: AndyLiebman, linux-raid
On Wed, 2004-05-19 at 15:00, Ricky Beam wrote:
> On Wed, 19 May 2004 AndyLiebman@aol.com wrote:
> >But the question still remains, is there any other safety and reliability
> >advantage to using Hardware? Is the data on a Hardware RAID more likely to
> >remain intact in the event of a computer crash or freeze?
>
> The Linux software RAID is good stuff. It *can* be difficult to repair
> an array in a few cases -- don't put the root FS on a non-mirrored array
> and you shouldn't have a problem.
>
> As you don't care about CPU cycles used by the array, you're far better
> off using software RAID with normal drive controllers (especially true
> for SATA). Hardware RAID cards generally offer better manageability
> and stability -- the OS doesn't have to know if a drive fails, they're
> designed to hot-swap drives with little or no fuss, etc. But they will
> be just as reliable as anything else.
Is there any support in Linux that makes this hot swapping a little
easier?
>
> For a system I never want to have to touch, I use hardware raid. For those
> systems sitting at my feet, that isn't as important.
>
> --Ricky
>
>
--
--------------------------------------------------
| Ming Zhang, PhD. Student
| Dept. of Electrical & Computer Engineering
| College of Engineering
| University of Rhode Island
| Kingston RI. 02881
| e-mail: mingz at ele.uri.edu
| Tel. (401) 874-2293
| Fax. (401) 782-6422
| http://www.ele.uri.edu/~mingz/
--------------------------------------------------
* Re: Hardware versus Software
2004-05-19 20:03 ` Ming Zhang
@ 2004-05-19 22:02 ` Ricky Beam
2004-05-19 23:23 ` Ming Zhang
0 siblings, 1 reply; 11+ messages in thread
From: Ricky Beam @ 2004-05-19 22:02 UTC (permalink / raw)
To: Ming Zhang; +Cc: linux-raid
On Wed, 19 May 2004, Ming Zhang wrote:
>Is there any support in Linux that makes this hot swapping a little
>easier?
It's not the kernel's fault. Not entirely... SCSI will let you hot plug
with little trouble. IDE is a different problem.
The real problem is the system hardware. Aside from SATA, I don't know
of any MB that openly supports hot plugging of drives without some risk of
Real Trouble (tm).
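One assist that does help: asking the kernel to rescan a controller after a
drive swap. A minimal sketch, assuming a 2.6 kernel and driver combination
that exposes /sys/class/scsi_host/<host>/scan (availability varies):

#!/usr/bin/env python
# Ask the SCSI/SATA layer to rescan every host by writing the wildcard
# "- - -" (channel target lun) to its sysfs scan attribute.
import glob

def rescan_all_hosts():
    for path in glob.glob("/sys/class/scsi_host/host*/scan"):
        try:
            with open(path, "w") as f:
                f.write("- - -\n")
            print("rescanned %s" % path)
        except IOError as err:
            print("could not rescan %s: %s" % (path, err))

if __name__ == "__main__":
    rescan_all_hosts()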
--Ricky
* Re: Hardware versus Software
2004-05-19 6:49 Hardware versus Software AndyLiebman
2004-05-19 19:00 ` Ricky Beam
@ 2004-05-19 22:30 ` Ben Edwards
2004-05-21 15:57 ` Ricky Beam
2004-05-20 2:00 ` dean gaudet
2 siblings, 1 reply; 11+ messages in thread
From: Ben Edwards @ 2004-05-19 22:30 UTC (permalink / raw)
To: AndyLiebman, Linux RAID List
AndyLiebman@aol.com wrote:
>I'm sure this has been discussed many times on the list but I was asked this
>question today and I'm not sure how to respond:
>
>Under Linux (i.e., a distribution such as Mandrake 10 -- with an up-to-date
>2.6 kernel), is a "true hardware RAID-5" created through a SATA card such as
>the 3ware 8506 or the new 3ware 9500 series any SAFER or MORE RELIABLE than a
>software RAID created with the same card?
>
>In my benchmark comparisons between software and hardware RAID, I get
>significantly better performance using "software RAID". With eight 250 GB disks, I
>get about 175 MB/sec reading and about 150 MB/sec writing when my system is
>configured as software RAID. With hardware RAID the figures drop to about 125/100
>MB/sec.
>
>I'm only using my machines as file servers, so they aren't busy doing any
>other tasks besides sending data packets through Gigabit Ethernet and running
>disk I/O.
>
>But the question still remains, is there any other safety and reliability
>advantage to using Hardware? Is the data on a Hardware RAID more likely to remain
>intact in the event of a computer crash or freeze?
>
>Or in the event of an abrupt power failure (I have a UPS on the system, but
>that could fail, or the power cables could be pulled out of the computer or
>somebody could accidentally shut it down). All of these power failure scenarios are
>very unlikely, but they COULD occur.
>
>Would Hardware RAID survive better than Software? I can't see why, but maybe
>I'm missing something.
>
>In my situation where the server isn't running any other software, the only
>advantage I can see to hardware RAID is that rebuilding in the event of a disk
>failure is a little easier for non-experts. But I'm writing a program to
>automate the software RAID rebuilding process so that non-experts can do it
>themselves.
>
>Your informed comments would be appreciated.
>
>
This is VERY interesting. Does the documentation for software RAID say
it is doing true RAID 5? I would be very surprised if software RAID was
faster, and if it is, I would think the card is either broken or rubbish
(stop me at any time if my English gets too professional). If what you
say is the case, you should circulate this in other places (slashdot,
debian-user, etc.) as I am sure others would be very interested. This
list does not seem to be that active, so I am not sure you will get much
response here. If you find anything out, be sure to let the list know.
Ben
* Re: Hardware versus Software
2004-05-19 22:02 ` Ricky Beam
@ 2004-05-19 23:23 ` Ming Zhang
0 siblings, 0 replies; 11+ messages in thread
From: Ming Zhang @ 2004-05-19 23:23 UTC (permalink / raw)
To: Ricky Beam; +Cc: linux-raid
So the question here is that if I want to use software raid with SATA,
is there any good hardware combination to support this hot plug? Thanks
a lot.
Ming
On Wed, 2004-05-19 at 18:02, Ricky Beam wrote:
> On Wed, 19 May 2004, Ming Zhang wrote:
> >is there any support from linux that can do this hot swappable a little
> >easier?
>
> It's not the kernel's fault. Not entirely... SCSI will let you hot plug
> with little trouble. IDE is a different problem.
>
> The real problem is the system hardware. Aside from SATA, I don't know
> of any MB that openly supports hot plugging of drives without some risk of
> Real Trouble (tm).
>
> --Ricky
--
--------------------------------------------------
| Ming Zhang, PhD. Student
| Dept. of Electrical & Computer Engineering
| College of Engineering
| University of Rhode Island
| Kingston RI. 02881
| e-mail: mingz at ele.uri.edu
| Tel. (401) 874-2293
| Fax. (401) 782-6422
| http://www.ele.uri.edu/~mingz/
--------------------------------------------------
* Re: Hardware versus Software
2004-05-19 6:49 Hardware versus Software AndyLiebman
2004-05-19 19:00 ` Ricky Beam
2004-05-19 22:30 ` Ben Edwards
@ 2004-05-20 2:00 ` dean gaudet
2 siblings, 0 replies; 11+ messages in thread
From: dean gaudet @ 2004-05-20 2:00 UTC (permalink / raw)
To: AndyLiebman; +Cc: linux-raid
On Wed, 19 May 2004 AndyLiebman@aol.com wrote:
> Would Hardware RAID survive better than Software? I can't see why, but maybe
> I'm missing something.
there are reliability questions we can answer directly about software raid
because we have the source code. for hardware raid we have to trust the
vendors' claims.
i choose "the devil i know" -- sw raid. i've got a wishlist documenting
problems i've encountered: <http://arctic.org/~dean/raid-wishlist.html>.
some of those features may exist in hw raids -- but it's hard to get
answers on specifics like that.
-dean
* Re: Hardware versus Software
2004-05-19 22:30 ` Ben Edwards
@ 2004-05-21 15:57 ` Ricky Beam
0 siblings, 0 replies; 11+ messages in thread
From: Ricky Beam @ 2004-05-21 15:57 UTC (permalink / raw)
To: Ben Edwards; +Cc: Linux RAID List
On Wed, 19 May 2004, Ben Edwards wrote:
>... I would be very surprised if software RAID was faster, and if it
>is, I would think the card is either broken or rubbish
Negative. HW RAID cards may have specialized hardware for doing XOR
calculations. However, no RAID card currently on the market has a 3.2GHz
Xeon at its core. Most are ARM (up to 750MHz?) or i960 (up to 66MHz) based.
The system CPU can certainly outperform the tiny processors on the RAID
card. Of course, if it's doing RAID calculations, it's not doing general
computational work, which is one reason to use HW RAID.
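A minimal sketch of the arithmetic in question: RAID-5 parity is just the
byte-wise XOR of the data chunks in a stripe, and a lost chunk is recovered
by XORing the surviving chunks with the parity (chunk size illustrative):

#!/usr/bin/env python
# Compute RAID-5 parity for one stripe and rebuild a "lost" chunk from the
# surviving chunks plus the parity.
import os

CHUNK = 64 * 1024   # 64 KiB chunks, a common md default

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

if __name__ == "__main__":
    data = [os.urandom(CHUNK) for _ in range(3)]   # 3 data chunks per stripe
    parity = xor_blocks(data)

    # Pretend the second data chunk was lost; rebuild it from the rest.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("reconstructed %d bytes correctly" % len(rebuilt))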
--Ricky
* RE: Hardware versus Software
@ 2004-05-21 17:44 Salyzyn, Mark
2004-05-21 18:39 ` Guy
0 siblings, 1 reply; 11+ messages in thread
From: Salyzyn, Mark @ 2004-05-21 17:44 UTC (permalink / raw)
To: Ricky Beam, Ben Edwards; +Cc: Linux RAID List
CPU utilization is certainly one of the metrics we try our best to
optimize for. When one drops CPU utilization on a server, one can opt
for a cooler running or cheaper main processor.
The XOR engine, DMA engines and cache are the three feature assists that
allow a less meaty (read: lower cost, lower heat) processor on the HW
RAID cards while still offering some added performance or cost advantage.
Right-sizing that processing power matters when trying to keep the heat
down in a 1U enclosure, for instance. Keeping the number of interrupts
delivered to the main CPU down, especially if it is not a class of CPU
designed for I/O throughput (such as the x86 designs), keeps context
switching and CPU utilization to a minimum.
OS-independent Bootability, Configuration tools, Installability, Boot
Redundancy, Hardware-based Hot Swapping and years of Intellectual
Property and Experience have to add up to something ;-> Qualifying a
product is extremely difficult, and being able to place `ownership' of
RAID problems tidily in one engineering entity with strict revision and
quality control is one of the main reasons HW RAID still represents a
viable solution in the enterprise space.
EMD was born out of the needs of the OEM clashing with the needs of the
many ;-/
Attempts have been made to use lower-cost, neutered HW (no hot swap, for
instance) or SW solutions in the Enterprise space; the results have been
both costly and devastating.
Linux stands in the Enterprise, Desktop and Embedded spaces; it behoves
the engineer to take each space's needs on its own merits. For
instance, I probably would not use a HW RAID controller in an embedded
application unless I could measure cost savings or performance
improvements on the Host Processor that offset the cost of the added HW.
But in the Enterprise space I would require one of the HW features and
it would be a no-brainer. There are no simple answers regardless.
Sincerely -- Mark Salyzyn
* RE: Hardware versus Software
2004-05-21 17:44 Salyzyn, Mark
@ 2004-05-21 18:39 ` Guy
0 siblings, 0 replies; 11+ messages in thread
From: Guy @ 2004-05-21 18:39 UTC (permalink / raw)
To: 'Salyzyn, Mark', 'Ricky Beam', 'Ben Edwards'
Cc: 'Linux RAID List'
From what I have seen, the XOR logic does not use much CPU. Less than 5% on
my 500 MHz P3 system during a rebuild of a 14-disk RAID5 array, at a rebuild
rate of about 5000 KB/sec per disk, or 70 MB/sec total. It would be hard to
say that a 5% load requires a CPU upgrade!
Guy
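A minimal sketch of how such a figure might be measured while a rebuild is
running -- sampling the aggregate cpu line of /proc/stat twice (the interval
is arbitrary, and iowait counts as busy here):

#!/usr/bin/env python
# Estimate overall CPU utilization from two samples of /proc/stat.
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()    # "cpu user nice system idle ..."
    values = [int(v) for v in fields[1:]]
    return values[3], sum(values)        # (idle jiffies, total jiffies)

if __name__ == "__main__":
    idle1, total1 = cpu_times()
    time.sleep(5)
    idle2, total2 = cpu_times()
    busy = 100.0 * (1.0 - (idle2 - idle1) / float(total2 - total1))
    print("CPU utilization over 5s: %.1f%%" % busy)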
* RE: Hardware versus Software
@ 2004-05-21 18:52 Guy
0 siblings, 0 replies; 11+ messages in thread
From: Guy @ 2004-05-21 18:52 UTC (permalink / raw)
To: 'Salyzyn, Mark', 'Ricky Beam', 'Ben Edwards'
Cc: 'Linux RAID List'
I should have added...
There are other reasons for going to hardware RAID:
Easy hot swap. Some even indicate the bad drive with an LED.
Disk surface testing.
Bad block relocation.
Parity verification.
Mirror verification.
Ability to create logical disks that look like physical disks to the OS.
50% less I/O bandwidth usage on a RAID1 system (the host sends the data once).
I am sure there are others. But CPU load would not be a factor.
Not all hardware systems do all of the above.
Anyone else know how much CPU load you have during a rebuild?
Guy
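On the parity and mirror verification items in the list above: md grew a
comparable scrub well after this thread, exposed through sysfs. A minimal
sketch assuming a kernel whose md driver provides sync_action and
mismatch_cnt; the array name is a placeholder:

#!/usr/bin/env python
# Kick off an md "check" pass (read every stripe, verify parity/mirrors),
# wait for it to finish, and report the mismatch count.
import os, time

MD = "md0"
SYSFS = "/sys/block/%s/md" % MD

def scrub():
    with open(os.path.join(SYSFS, "sync_action"), "w") as f:
        f.write("check\n")
    while open(os.path.join(SYSFS, "sync_action")).read().strip() != "idle":
        time.sleep(30)
    mismatches = open(os.path.join(SYSFS, "mismatch_cnt")).read().strip()
    print("%s: check complete, mismatch_cnt = %s" % (MD, mismatches))

if __name__ == "__main__":
    scrub()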
Thread overview: 11+ messages
2004-05-19 6:49 Hardware versus Software AndyLiebman
2004-05-19 19:00 ` Ricky Beam
2004-05-19 20:03 ` Ming Zhang
2004-05-19 22:02 ` Ricky Beam
2004-05-19 23:23 ` Ming Zhang
2004-05-19 22:30 ` Ben Edwards
2004-05-21 15:57 ` Ricky Beam
2004-05-20 2:00 ` dean gaudet
-- strict thread matches above, loose matches on Subject: below --
2004-05-21 17:44 Salyzyn, Mark
2004-05-21 18:39 ` Guy
2004-05-21 18:52 Guy