* 3ware escalade vs software raid, from a different jeff
@ 2004-02-18 21:48 Rev. Jeffrey Paul
2004-02-18 22:58 ` Jeff Garzik
2004-02-19 11:49 ` Holger Kiehl
0 siblings, 2 replies; 23+ messages in thread
From: Rev. Jeffrey Paul @ 2004-02-18 21:48 UTC (permalink / raw)
To: linux-raid
I'm building an nfs server to export home directories to a few login
servers used somewhat heavily by a few hundred people. Additionally, on
the array will be a bunch of web content and mysql data files.
I'm currently thinking of a few different configurations, all using four
drives in the 180-250gb range. I could get sata or udma drives, I could
go raid5 or raid10, and I could do this in linux software raid (using,
say, a pair of promise ultra100s to give me four udma ports, or with an
SATA card), using 3ware's UDMA four-port escalade, or with 3ware's new
sata escalade.
What are the compelling reasons for using sata over udma? I've had really
good experiences with linux software raid, but always in small workgroup
or personal file servers. I'm wondering if this sort of big, random,
heavy load will drag down an otherwise speedy box. Is it worth it to just
drop the extra $300 and go with hardware raid? As good as linux
software raid is, I'm somehow still more comfortable just leaving it to
hardware.
What sort of ideas or experiences can you guys share?
I'm going for cost first (I've got a $1k budget or so), reliability and
stability second, and speed third.
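For concreteness, a rough sketch of the software-raid option using mdadm
(the device names below are placeholders, and raid10 is built the classic
way as a stripe over two mirrors):

    # four-disk raid5: one disk's worth of capacity goes to parity
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1

    # raid10 alternative: raid0 over two raid1 pairs (half the capacity,
    # no parity math on writes)
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdi1 /dev/hdk1
    mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2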
-j
--
--------------------------------------------------------
Rev. Jeffrey Paul -datavibe- sneak@datavibe.net
aim:x736e65616b pgp:0x15FA257E phone:8777483467
70E0 B896 D5F3 8BF4 4BEE 2CCF EF2F BA28 15FA 257E
--------------------------------------------------------
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-18 21:48 3ware escalade vs software raid, from a different jeff Rev. Jeffrey Paul
@ 2004-02-18 22:58 ` Jeff Garzik
2004-02-19 11:49 ` Holger Kiehl
1 sibling, 0 replies; 23+ messages in thread
From: Jeff Garzik @ 2004-02-18 22:58 UTC (permalink / raw)
To: Rev. Jeffrey Paul; +Cc: linux-raid
Rev. Jeffrey Paul wrote:
> I'm building an nfs server to export home directories to a few login
> servers used somewhat heavily by a few hundred people. Additionally, on
> the array will be a bunch of web content and mysql data files.
>
> I'm currently thinking of a few different configurations, all using four
> drives in the 180-250gb range. I could get sata or udma drives, I could
> go raid5 or raid10, and I could do this in linux software raid (using,
> say, a pair of promise ultra100s to give me four udma ports, or with an
> SATA card), using 3ware's UDMA four-port escalade, or with 3ware's new
> sata escalade.
3ware hardware is really nice, so that would be a good choice.
But... I am a bit biased: I almost always prefer software RAID over
hardware RAID. Much more flexibility to move disks between systems,
choose controller(s) you like, etc.
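To illustrate that flexibility, a sketch (assuming the members carry normal
md superblocks; device names are placeholders): the disks can be moved to a
different box and the array reassembled from the superblocks alone:

    mdadm --examine /dev/hde1                  # inspect the md superblock
    mdadm --assemble /dev/md0 \
          /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1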
In theory, the performance of hardware RAID can be better than that of
software RAID. In practice, only a few hardware RAID controllers actually
achieve this.
Jeff
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-18 21:48 3ware escalade vs software raid, from a different jeff Rev. Jeffrey Paul
2004-02-18 22:58 ` Jeff Garzik
@ 2004-02-19 11:49 ` Holger Kiehl
2004-02-19 12:06 ` Joshua Baker-LePain
2004-02-19 12:11 ` Måns Rullgård
1 sibling, 2 replies; 23+ messages in thread
From: Holger Kiehl @ 2004-02-19 11:49 UTC (permalink / raw)
To: Rev. Jeffrey Paul; +Cc: linux-raid
On Wed, 18 Feb 2004, Rev. Jeffrey Paul wrote:
>
> I'm building an nfs server to export home directories to a few login
> servers used somewhat heavily by a few hundred people. Additionally, on
> the array will be a bunch of web content and mysql data files.
>
> I'm currently thinking of a few different configurations, all using four
> drives in the 180-250gb range. I could get sata or udma drives, I could
> go raid5 or raid10, and I could do this in linux software raid (using,
> say, a pair of promise ultra100s to give me four udma ports, or with an
> SATA card), using 3ware's UDMA four-port escalade, or with 3ware's new
> sata escalade.
>
> What are the compelling reasons for using sata over udma?
>
For now, none really, except that in the future it will become more and
more difficult to get UDMA drives. But it might take a long time before
that really becomes a problem. SATA II will bring a real advantage.
> I've had really good experiences with linux software raid, but always
> in small workgroup or personal file servers. I'm wondering if this sort
> of big, random, heavy load will drag down an otherwise speedy box.
>
I have a system that has been running for nearly three years, distributing
some 2.3 million files with 200GB daily. This is with linux software raid
and I have encountered absolutely no problems. During the same period
another system (not linux) with a similar workload but with hardware raid
has failed twice, once making all data useless.
> Is it worth it to just drop the extra $300 and go with hardware raid?
>
In my opinion, no. Most of the cheaper hardware raids are really just
software raid solutions. For the more expensive ones, always remember
that you will need drivers, and there is no guarantee that you
will get them in two or three years. Some vendors no longer exist
or no longer support the product.
Also, don't worry about the slightly higher CPU usage of software raid.
The time spent by most applications is spent waiting for IO from disk/raid,
so any time gained there (software raid is faster than hardware raid) will
easily make up for the lost CPU time.
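If you want to see this for yourself, a rough sketch (the md thread name
depends on the kernel and array number):

    cat /proc/mdstat               # array state and resync progress
    vmstat 5                       # iowait ('wa' on recent kernels) vs. CPU
    top -b -n 1 | grep -i raid     # CPU used by the mdX_raidN kernel thread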
Regards,
Holger
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 11:49 ` Holger Kiehl
@ 2004-02-19 12:06 ` Joshua Baker-LePain
2004-02-19 17:56 ` Rev. Jeffrey Paul
2004-02-19 12:11 ` Måns Rullgård
1 sibling, 1 reply; 23+ messages in thread
From: Joshua Baker-LePain @ 2004-02-19 12:06 UTC (permalink / raw)
To: Holger Kiehl; +Cc: Rev. Jeffrey Paul, linux-raid
On Thu, 19 Feb 2004 at 11:49am, Holger Kiehl wrote
> In my opinion, no. Most of the cheaper hardware raids are really just
> software raid solutions. For the more expensive ones, always remember
> that you will need drivers, and there is no guarantee that you
> will get them in two or three years. Some vendors no longer exist
> or no longer support the product.
I fully agree on avoiding cheap "hardware" RAID cards. Regarding drivers
going away, though, the 3ware drivers are open source and have been in the
kernel for a long time. If 3ware were to disappear (unlikely), I'd be
willing to bet that the drivers stay supported by the community.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 12:06 ` Joshua Baker-LePain
@ 2004-02-19 17:56 ` Rev. Jeffrey Paul
2004-02-19 19:58 ` Michael
2004-02-20 0:38 ` Jeff Garzik
0 siblings, 2 replies; 23+ messages in thread
From: Rev. Jeffrey Paul @ 2004-02-19 17:56 UTC (permalink / raw)
To: Joshua Baker-LePain; +Cc: linux-raid
On Thu, Feb 19, 2004 at 07:06:32AM -0500, Joshua Baker-LePain wrote:
>
> I fully agree on avoiding cheap "hardware" RAID cards. Regarding drivers
> going away, though, the 3ware drivers are open source and have been in the
> kernel for a long time. If 3ware were to disappear (unlikely), I'd be
> willing to bet that the drivers stay supported by the community.
>
I have always avoided the promise fasttrak cards for exactly that
reason ("hardware" being in quotes). It's my understanding that
they're not much more than their udma cards with drivers that do most
of the RAIDing.
It was also my understanding that the 3ware cards are true hardware
raid and are up to the task of something like this, which would also
explain why they're 4x the cost.
I am looking at a four-disk raid5 or raid10, and it seems like
the interrupt load from four drives on four channels might be a bit
excessive. I'm going to be running critical services on the machine
that the drives are in (namely mysql and nfs) and don't want to worry
about performance.
Am I thinking about this the wrong way?
-j
--
--------------------------------------------------------
Rev. Jeffrey Paul -datavibe- sneak@datavibe.net
aim:x736e65616b pgp:0x15FA257E phone:8777483467
70E0 B896 D5F3 8BF4 4BEE 2CCF EF2F BA28 15FA 257E
--------------------------------------------------------
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 17:56 ` Rev. Jeffrey Paul
@ 2004-02-19 19:58 ` Michael
2004-02-19 20:18 ` Ricky Beam
` (2 more replies)
2004-02-20 0:38 ` Jeff Garzik
1 sibling, 3 replies; 23+ messages in thread
From: Michael @ 2004-02-19 19:58 UTC (permalink / raw)
To: linux-raid
> On Thu, Feb 19, 2004 at 07:06:32AM -0500, Joshua Baker-LePain wrote:
> >
> > I fully agree on avoiding cheap "hardware" RAID cards. Regarding drivers
> > going away, though, the 3ware drivers are open source and have been in the
> > kernel for a long time. If 3ware were to disappear (unlikely), I'd be
> > willing to bet that the drivers stay supported by the community.
> >
>
> I have always avoided the promise fasttrak cards for exactly that
> reason ("hardware" being in quotes). It's my understanding that
> they're not much more than their udma cards with drivers that do
> most of the RAIDing.
>
> It was also my understanding that the 3ware cards are true hardware
> raid and are up to the task of something like this, which would also
> explain why they're 4x the cost.
>
Bear in mind that what you are calling "true hardware raid" is really
a microprocessor programmed to do the raid algorithms. Usually these
microprocessors are stretched to the limit to handle the throughput
of modern udma drives. I don't know, but I suspect that the small
overhead used in the mmu for software raid gives far more and faster
throughput than any of these dedicated microprocessors... and you
can see the code and know it is bug free, or it will be once you report the
bug. I am the unhappy owner of several Adaptec raid cards that have
onboard processors to handle not only raid, but command processing
for the scsi bus. These turkeys have micro-code bugs that cause a
variety of problems for which there is no workaround or solution other
than trashing the cards. Don't get me wrong, I think the 3ware
product is exceptionally good; I just wouldn't use the raid code
given the choice of linux software raid.
Currently running 10 linux software raid boxes -- a mix of raid 1 and
raid 5. Yes, I'm biased :-)
Michael
Michael@Insulin-Pumpers.org
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 19:58 ` Michael
@ 2004-02-19 20:18 ` Ricky Beam
2004-02-19 22:04 ` Scsi adapters and software raid Bob Hillegas
2004-02-19 22:33 ` 3ware escalade vs software raid, from a different jeff Scott Long
2004-02-19 22:29 ` Scott Long
2004-02-20 4:26 ` Jeff Garzik
2 siblings, 2 replies; 23+ messages in thread
From: Ricky Beam @ 2004-02-19 20:18 UTC (permalink / raw)
To: Michael; +Cc: linux-raid
On Thu, 19 Feb 2004, Michael wrote:
>Bear in mind that what you are calling "true hardware raid" is really
>a microprocessor programmed to do the raid algorithms.
3ware has custom-designed matrix switch chips to handle each IDE drive.
This *alone* is worth the cost of the card. The RAID parity calculations
are also done in hardware (not a bunch of CPU XORs). You'd be surprised
how much data a small "slow" processor can move when properly programmed.
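The parity operation itself is just a byte-wise XOR across the data blocks.
A toy sketch in shell arithmetic (made-up values, nothing like the kernel's
optimized code) shows the write-side calculation and a rebuild:

    d1=0xA5; d2=0x3C; d3=0x5A                # data bytes on three disks
    p=$(( d1 ^ d2 ^ d3 ))                    # parity byte on the fourth disk
    printf 'parity  = 0x%02X\n' "$p"
    rebuilt=$(( d1 ^ d3 ^ p ))               # disk 2 lost: XOR the survivors
    printf 'rebuilt = 0x%02X (expect 0x3C)\n' "$rebuilt"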
>These turkeys have micro-code bugs that cause a
>variety of problems for which there is no workaround or solution other
>than trashing the cards.
Exactly what bugs? I've used a number of hardware SCSI RAID cards (none
by Adaptec, however -- some that are now owned by Adaptec) and all of them
work properly -- as long as they aren't physically damaged (I had one with
a bad DIMM).
--Ricky
* Scsi adapters and software raid...
2004-02-19 20:18 ` Ricky Beam
@ 2004-02-19 22:04 ` Bob Hillegas
2004-02-20 0:33 ` Kanoa Withington
2004-02-20 4:27 ` Jeff Garzik
2004-02-19 22:33 ` 3ware escalade vs software raid, from a different jeff Scott Long
1 sibling, 2 replies; 23+ messages in thread
From: Bob Hillegas @ 2004-02-19 22:04 UTC (permalink / raw)
To: linux-raid
It's been interesting reading these comments about adapters to avoid.
My question is... what SCSI adapter works WELL with software raid
(mdadm) and doesn't get in the way?
Thanks, BobH
On Thu, 2004-02-19 at 14:18, Ricky Beam wrote:
> On Thu, 19 Feb 2004, Michael wrote:
> >Bear in mind that what you are calling "true hardware raid" is really
> >a microprocessor programmed to do the raid algorithms.
>
> 3ware has custom-designed matrix switch chips to handle each IDE drive.
> This *alone* is worth the cost of the card. The RAID parity calculations
> are also done in hardware (not a bunch of CPU XORs). You'd be surprised
> how much data a small "slow" processor can move when properly programmed.
>
> >These turkeys have micro-code bugs that cause a
> >variety of problems for which there is no workaround or solution other
> >than trashing the cards.
>
> Exactly what bugs? I've used a number of hardware SCSI RAID cards (none
> by Adaptec, however -- some that are now owned by Adaptec) and all of them
> work properly -- as long as they aren't physically damaged (I had one with
> a bad DIMM).
>
> --Ricky
--
Bob Hillegas <bobh@south.rosestar.lan>
* Re: Scsi adapters and software raid...
2004-02-19 22:04 ` Scsi adapters and software raid Bob Hillegas
@ 2004-02-20 0:33 ` Kanoa Withington
2004-02-20 4:27 ` Jeff Garzik
1 sibling, 0 replies; 23+ messages in thread
From: Kanoa Withington @ 2004-02-20 0:33 UTC (permalink / raw)
To: Bob Hillegas; +Cc: linux-raid
Bob,
All the Adaptec non-RAID SCSI controllers work nicely. The ones based
on the 79xx and newer chips require the newer kernel module. The 78xx
and older ones are all supported by the old module. They are generally
stable, compatible and offer good performance.
Their SCSI _hardware_ RAID controllers are a different story. Steer
clear of those.
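In concrete terms (assuming a reasonably recent kernel), that split is
between the aic7xxx and aic79xx drivers; a quick way to check what a given
card needs:

    lspci | grep -i adaptec       # identify the chip on the card
    modprobe aic79xx              # Ultra320 (79xx-based) controllers
    modprobe aic7xxx              # 78xx and older controllers
    dmesg | tail                  # confirm the driver found the bus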
-Kanoa
On Thu, 19 Feb 2004, Bob Hillegas wrote:
> It's been interesting reading these comments about adapters to avoid.
>
> My question is... what SCSI adapter works WELL with software raid
> (mdadm) and doesn't get in the way?
* Re: Scsi adapters and software raid...
2004-02-19 22:04 ` Scsi adapters and software raid Bob Hillegas
2004-02-20 0:33 ` Kanoa Withington
@ 2004-02-20 4:27 ` Jeff Garzik
1 sibling, 0 replies; 23+ messages in thread
From: Jeff Garzik @ 2004-02-20 4:27 UTC (permalink / raw)
To: Bob Hillegas; +Cc: linux-raid
Bob Hillegas wrote:
> It's been interesting reading these comments about adapters to avoid.
>
> My question is... what SCSI adapter works WELL with software raid
> (mdadm) and doesn't get in the way?
For parallel SCSI?
IMO Adaptec is really the only big vendor in the game anymore. I think
QLogic dumped their parallel SCSI hardware (correct me if I'm wrong),
even though they still support their parallel SCSI driver.
Personally, though, I would skip SCSI and go straight to Serial ATA :)
Jeff
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 20:18 ` Ricky Beam
2004-02-19 22:04 ` Scsi adapters and software raid Bob Hillegas
@ 2004-02-19 22:33 ` Scott Long
2004-02-19 23:52 ` Guy
1 sibling, 1 reply; 23+ messages in thread
From: Scott Long @ 2004-02-19 22:33 UTC (permalink / raw)
To: Ricky Beam; +Cc: Michael, linux-raid
Ricky Beam wrote:
> On Thu, 19 Feb 2004, Michael wrote:
> >Bear in mind that what you are calling "true hardware raid" is really
> >a microprocessor programmed to do the raid algorithms.
>
> 3ware has custom-designed matrix switch chips to handle each IDE drive.
> This *alone* is worth the cost of the card. The RAID parity calculations
> are also done in hardware (not a bunch of CPU XORs). You'd be surprised
> how much data a small "slow" processor can move when properly programmed.
>
Are you talking about host-processor XOR or adapter card XOR? The i960
processor used in practically every RAID card known to man right now has
an XOR engine in the memory controller. It's not quite as efficient as
some higher-end implementations, but it's a heck of a lot better than
having that i960 core spin through an XOR software loop.
Scott
* RE: 3ware escalade vs software raid, from a different jeff
2004-02-19 22:33 ` 3ware escalade vs software raid, from a different jeff Scott Long
@ 2004-02-19 23:52 ` Guy
0 siblings, 0 replies; 23+ messages in thread
From: Guy @ 2004-02-19 23:52 UTC (permalink / raw)
To: 'Scott Long', 'Ricky Beam'; +Cc: 'Michael', linux-raid
I have 14 18GB disks in a RAID5. When it rebuilds I get just over
6MB/sec. That's 6MB/sec per disk, or 84MB of total I/O per second. The CPU
load is less than 5%. My system is a P3-500. If I had a real computer I
would not be able to measure the CPU usage, it would be too low! :)
I have a 2.4 kernel.
My SCSI buses are the limiting factor:
a 40MB/s bus with 6 disks, an 80MB/s bus with 7, and an 80MB/s bus with 1.
I have only used a few hardware RAID cards, and they are not this fast!
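For anyone who wants to watch or tune a rebuild on their own box, a sketch
(the sysctl values are in KB/s per device and the defaults vary by kernel):

    cat /proc/mdstat                                  # rebuild progress/speed
    cat /proc/sys/dev/raid/speed_limit_min            # guaranteed floor
    cat /proc/sys/dev/raid/speed_limit_max            # ceiling
    echo 50000 > /proc/sys/dev/raid/speed_limit_max   # raise the ceiling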
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 19:58 ` Michael
2004-02-19 20:18 ` Ricky Beam
@ 2004-02-19 22:29 ` Scott Long
2004-02-20 4:26 ` Jeff Garzik
2 siblings, 0 replies; 23+ messages in thread
From: Scott Long @ 2004-02-19 22:29 UTC (permalink / raw)
To: michael; +Cc: linux-raid
Michael wrote:
>
> Bear in mind that what you are calling "true hardware raid" is really
> a microprocessor programmed to do the raid algorithms. Usually these
> microprocessors are stretched to the limit to handle the throughput
> of modern udma drives. I don't know, but I suspect that the small
> overhead used in the mmu for software raid gives far more and faster
> throughput than any of these dedicated microprocessors... and you
> can see the code and know it is bug free, or it will be once you report the
> bug. I am the unhappy owner of several Adaptec raid cards that have
> onboard processors to handle not only raid, but command processing
> for the scsi bus.
What exactly are you talking about here? Are you using a multi-channel
RAID card to do RAID on one channel and SCSI-passthru on the other?
Please explain.
> These turkeys have micro-code bugs that cause a
> variety of problems for which there is no workaround or solution other
> than trashing the cards.
What 'micro-code' bugs are you talking about? What problems are you
seeing, exactly? If you could provide some details to back up these
claims, there might be some recourse.
> Don't get me wrong, I think the 3ware
> product is exceptionally good; I just wouldn't use the raid code
> given the choice of linux software raid.
>
> Currently running 10 linux software raid boxes -- mix of raid 1 and
> raid 5. Yes, I'm biased :-)
>
> Michael
> Michael@Insulin-Pumpers.org
Scott
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 19:58 ` Michael
2004-02-19 20:18 ` Ricky Beam
2004-02-19 22:29 ` Scott Long
@ 2004-02-20 4:26 ` Jeff Garzik
2004-02-20 4:40 ` Scott Long
2 siblings, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2004-02-20 4:26 UTC (permalink / raw)
To: michael; +Cc: linux-raid
Michael wrote:
> Bear in mind that what you are calling "true hardware raid" is really
> a microprocessor programmed to do the raid algorithms. Usually these
> microprocessors are stretched to the limit to handle the throughput
> of modern udma drives. I don't know, but I suspect that the small
> overhead used in the mmu for software raid gives far more and faster
> throughput than any of these dedicated microprocessors... and you
> can see the code and know it is bug free, or it will be once you report the
Nod... many hardware RAIDs are turning out this way. A certain vendor
whose name does -not- start with 'A' manages to make their hardware RAID
perform so poorly, it is _half_ the speed of a software RAID using the
same drives, on a single non-RAID Adaptec SCSI controller.
Most hardware RAID isn't 100% ASIC, but rather a general ASIC and
firmware with the RAID code in it.
OTOH, hardware RAID really wins for situations like RAID-1, where you
can -halve- the amount of data going across the PCI bus versus software
RAID.
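A rough illustration of that point, with assumed numbers and ignoring reads:

    # writing 100 MB to a two-disk mirror:
    #   software RAID-1: the host pushes one copy per mirror across PCI
    #   hardware RAID-1: the host pushes one copy; the card duplicates it
    echo "software raid1: $(( 100 * 2 )) MB across the PCI bus"
    echo "hardware raid1: 100 MB across the PCI bus"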
Jeff
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-20 4:26 ` Jeff Garzik
@ 2004-02-20 4:40 ` Scott Long
0 siblings, 0 replies; 23+ messages in thread
From: Scott Long @ 2004-02-20 4:40 UTC (permalink / raw)
To: Jeff Garzik; +Cc: michael, linux-raid
Jeff Garzik wrote:
> Michael wrote:
> > Bear in mind that what you are calling "true hardware raid" is really
> > a microprocessor programmed to do the raid algorithms. Usually these
> > microprocessors are stretched to the limit to handle the throughput
> > of modern udma drives. I don't know, but I suspect that the small
> > overhead used in the mmu for software raid gives far more and faster
> > throughput than any of these dedicated microprocessors... and you
> > can see the code and know it is bug free, or it will be once you report the
>
> Nod... many hardware RAIDs are turning out this way. A certain vendor
> whose name does -not- start with 'A' manages to make their hardware RAID
> perform so poorly, it is _half_ the speed of a software RAID using the
> same drives, on a single non-RAID Adaptec SCSI controller.
>
Whoever could that be????
> Most hardware RAID isn't 100% ASIC, but rather a general ASIC and
> firmware with the RAID code in it.
>
Well, the 8030x (i960) chips are really tailored for RAID. That's why
they have a PCI-PCI bridge, XOR accelerator, and DRAM controller built
into the package.
> OTOH, hardware RAID really wins for situations like RAID-1, where you
> can -halve- the amount of data going across the PCI bus versus software
> RAID.
If performance is how you measure the quality of RAID, then you can't
forget the benefit of a large write cache on the controller and the
intelligence in the raid stack to use it well. However, the real
benefit to hardware-accelerated RAID is the ability to guarantee that
your reads and writes always succeed, regardless of the failure thrown
at it. Of course, the number of cards out there that actually meet
this goal is quite small, even from companies whose names start with
'A'.
Scott
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 17:56 ` Rev. Jeffrey Paul
2004-02-19 19:58 ` Michael
@ 2004-02-20 0:38 ` Jeff Garzik
2004-02-20 7:19 ` Joshua Baker-LePain
1 sibling, 1 reply; 23+ messages in thread
From: Jeff Garzik @ 2004-02-20 0:38 UTC (permalink / raw)
To: Rev. Jeffrey Paul; +Cc: Joshua Baker-LePain, linux-raid
Rev. Jeffrey Paul wrote:
> I have always avoided the promise fasttrak cards for exactly that
> reason ("hardware" being in quotes). It's my understanding that
> they're not much more than their udma cards with drivers that do most
> of the RAIDing.
Actually, Promise is one of the very few companies that are doing
something innovative in RAID... Their hardware is not 100% software
RAID nor 100% hardware RAID. They instead follow the model of network
cards -- perform all key operations on the board, and let the host CPU
handle the rest.
> It was also my understanding that the 3ware cards are true hardware
> raid and are up to the task of something like this, which would also
> explain why they're 4x the cost.
Correct.
> I am looking at a four-disk raid5 or raid10, and it seems like
> the interrupt load from four drives on four channels might be a bit
> excessive. I'm going to be running critical services on the machine
> that the drives are in (namely mysql and nfs) and don't want to worry
> about performance.
Once your spindles can max out your PCI bus bandwidth, -then- you can
start worrying about PCI bandwidth and interrupt load ;-)
Jeff
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-20 0:38 ` Jeff Garzik
@ 2004-02-20 7:19 ` Joshua Baker-LePain
0 siblings, 0 replies; 23+ messages in thread
From: Joshua Baker-LePain @ 2004-02-20 7:19 UTC (permalink / raw)
To: Jeff Garzik; +Cc: Rev. Jeffrey Paul, linux-raid
On Thu, 19 Feb 2004 at 7:38pm, Jeff Garzik wrote
> Rev. Jeffrey Paul wrote:
> > I am looking at a four-disk raid5 or raid10, and it seems like
> > the interrupt load from four drives on four channels might be a bit
> > excessive. I'm going to be running critical services on the machine
> > that the drives are in (namely mysql and nfs) and don't want to worry
> > about performance.
>
> Once your spindles can max out your PCI bus bandwidth, -then- you can
> start worrying about PCI bandwidth and interrupt load ;-)
With some boards, it's actually pretty easy to do that. ;) Not with 4
disks, as the OP is doing, but 8 disks on one 64-bit/33MHz bus (3ware
7500-8) can start to bump up against the PCI bandwidth. I've got a couple
of servers with 2 3wares and 16 disks, and I need to put the 3wares on
separate PCI buses to make sure I get the full speed out of the combined
array (hardware RAID5, software RAID0).
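Back-of-the-envelope numbers for why that happens (the per-disk streaming
rate is an assumption for drives of that era):

    # 64-bit/33MHz PCI tops out around 266 MB/s in theory, less in practice
    echo "one 7500-8:  $(( 8 * 40 )) MB/s from 8 disks at ~40 MB/s each"
    echo "two buses:   $(( 8 * 40 )) MB/s per bus, under each bus's limit"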
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 11:49 ` Holger Kiehl
2004-02-19 12:06 ` Joshua Baker-LePain
@ 2004-02-19 12:11 ` Måns Rullgård
2004-02-19 12:32 ` Holger Kiehl
2004-02-19 12:32 ` Jeff Garzik
1 sibling, 2 replies; 23+ messages in thread
From: Måns Rullgård @ 2004-02-19 12:11 UTC (permalink / raw)
To: linux-raid
Holger Kiehl <Holger.Kiehl@dwd.de> writes:
> In my opinion, no. Most of the cheaper hardware raids are really just
> software raid solutions. For the more expensive ones, always remember
> that you will need drivers, and there is no guarantee that you
> will get them in two or three years. Some vendors no longer exist
> or no longer support the product.
Wasn't there some talk recently about standardizing the on-disk
format for both hardware and software RAID?
--
Måns Rullgård
mru@kth.se
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 12:11 ` Måns Rullgård
@ 2004-02-19 12:32 ` Holger Kiehl
2004-02-19 12:32 ` Jeff Garzik
1 sibling, 0 replies; 23+ messages in thread
From: Holger Kiehl @ 2004-02-19 12:32 UTC (permalink / raw)
To: linux-raid
On Thu, 19 Feb 2004, Måns Rullgård wrote:
> Holger Kiehl <Holger.Kiehl@dwd.de> writes:
>
> > In my opinion, no. Most of the cheaper hardware raids are really just
> > software raid solutions. For the more expensive ones, always remember
> > that you will need drivers, and there is no guarantee that you
> > will get them in two or three years. Some vendors no longer exist
> > or no longer support the product.
>
> Wasn't there some talk recently about standardizing the on-disk
> format for both hardware and software RAID?
>
Correct. I read this with great interest and hope that all vendors
agree on a standard.
Holger
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-19 12:11 ` Måns Rullgård
2004-02-19 12:32 ` Holger Kiehl
@ 2004-02-19 12:32 ` Jeff Garzik
1 sibling, 0 replies; 23+ messages in thread
From: Jeff Garzik @ 2004-02-19 12:32 UTC (permalink / raw)
To: Måns Rullgård; +Cc: linux-raid
Måns Rullgård wrote:
> Holger Kiehl <Holger.Kiehl@dwd.de> writes:
>
>
>>In my opinion, no. Most of the cheaper hardware raids are really just
>>software raid solutions. For the more expensive ones, always remember
>>that you will need drivers, and there is no guarantee that you
>>will get them in two or three years. Some vendors no longer exist
>>or no longer support the product.
>
>
> Wasn't there some talk recently about standardizing the on-disk
> format for both hardware and software RAID?
It's happening.
"SNIA" has created an on-disk format "DDF", which both software and
hardware RAID vendors have pretty much all agreed to use.
Of course, the spec is not public yet, so who knows what they have
agreed to...
Jeff
* Re: 3ware escalade vs software raid, from a different jeff
@ 2004-02-23 22:23 Jeff Gray
2004-02-24 8:06 ` Holger Kiehl
2004-02-24 14:16 ` Joshua Baker-LePain
0 siblings, 2 replies; 23+ messages in thread
From: Jeff Gray @ 2004-02-23 22:23 UTC (permalink / raw)
To: Holger.Kiehl; +Cc: linux-raid
Greetings Holger,
>From: Holger Kiehl <Holger.Kiehl@dwd.de>
>Subject: Re: 3ware escalade vs software raid, from a different jeff
>Date: Thu, 19 Feb 2004 11:49:09 +0000 (GMT)
>I have a system that has been running for nearly three years, distributing
>some 2.3 million files with 200GB daily. This is with linux software raid
>and I have encountered absolutely no problems. During the same period
>another system (not linux) with a similar workload but with hardware raid
>has failed twice, once making all data useless.
I am curious as to which filesystem you are using on that server. I've
asked questions on other mailing lists before regarding journaling
filesystems but it's always interesting to see how people are using them
in real-life scenarios. Currently I'm trying to choose between Reiser and XFS.
Kind Regards,
Jeff Gray: the "other" Jeff ;)
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-23 22:23 Jeff Gray
@ 2004-02-24 8:06 ` Holger Kiehl
2004-02-24 14:16 ` Joshua Baker-LePain
1 sibling, 0 replies; 23+ messages in thread
From: Holger Kiehl @ 2004-02-24 8:06 UTC (permalink / raw)
To: Jeff Gray; +Cc: linux-raid
Hello Jeff
On Mon, 23 Feb 2004, Jeff Gray wrote:
> Greetings Holger,
>
> >From: Holger Kiehl <Holger.Kiehl@dwd.de>
> >Subject: Re: 3ware escalade vs software raid, from a different jeff
> >Date: Thu, 19 Feb 2004 11:49:09 +0000 (GMT)
>
> >I have a system that has been running for nearly three years, distributing
> >some 2.3 million files with 200GB daily. This is with linux software raid
> >and I have encountered absolutely no problems. During the same period
> >another system (not linux) with a similar workload but with hardware raid
> >has failed twice, once making all data useless.
>
> I am curious as to which filesystem you are using on that server. I've
> asked questions on other mailing lists before regarding journaling
> filesystems but it's always interesting to see how people are using them
> in real-life scenarios. Currently I'm trying to choose between Reiser and XFS.
>
I am using ext2.
Holger
* Re: 3ware escalade vs software raid, from a different jeff
2004-02-23 22:23 Jeff Gray
2004-02-24 8:06 ` Holger Kiehl
@ 2004-02-24 14:16 ` Joshua Baker-LePain
1 sibling, 0 replies; 23+ messages in thread
From: Joshua Baker-LePain @ 2004-02-24 14:16 UTC (permalink / raw)
To: Jeff Gray; +Cc: Holger.Kiehl, linux-raid
On Mon, 23 Feb 2004 at 5:23pm, Jeff Gray wrote
> I am curious as to which filesystem you are using on that server. I've
> asked questions on other mailing lists before regarding journaling
> filesystems but it's always interesting to see how people are using them
> in real-life scenarios. Currently I'm trying to choose between Reiser and XFS.
It very much depends on your workload and typical file size. I use XFS on
my big 3ware-based servers (two 2TB servers and one 1TB), and generally am
*very* pleased with it. It's very popular on the linux-ide-arrays list as
well.
XFS is usually very fast. Where it falls down (at least, I think it's the
culprit) is when you have *lots* of very small files. One directory on
one of my servers is ~450GB and has ~3.4M files in ~29K
subdirectories (avg file size ~140KB). Operations on that directory are
noticeably slow. For that sort of workload, I'd think Reiser would
be better.
And lots of folks just stick with ext3. The best answer, of course, is to
test with something that resembles your expected workload as closely as
possible.
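A crude way to run that kind of test (a toy sketch only -- the path and file
count are placeholders, and tools like bonnie++ or your real application are
better yardsticks):

    mkdir -p /mnt/array/fstest && cd /mnt/array/fstest
    time sh -c 'i=0
    while [ $i -lt 10000 ]; do
        dd if=/dev/zero of=file$i bs=4k count=1 2>/dev/null
        i=$((i+1))
    done'
    ls | wc -l    # sanity check: 10000 small files created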
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University