* Re: Ok to go ahead with this setup?
2006-06-22 22:11 ` Molle Bestefich
@ 2006-06-23 2:44 ` H. Peter Anvin
2006-06-23 3:27 ` Bill Davidsen
` (2 subsequent siblings)
3 siblings, 0 replies; 19+ messages in thread
From: H. Peter Anvin @ 2006-06-23 2:44 UTC (permalink / raw)
To: Molle Bestefich; +Cc: Christian Pernegger, linux-raid
Molle Bestefich wrote:
> Christian Pernegger wrote:
>> Anything specific wrong with the Maxtors?
>
> No. I've used Maxtor for a long time and I'm generally happy with them.
>
> They break now and then, but their online warranty system is great.
> I've also been treated kindly by their help desk - talked to a cute
> gal from Maxtor in Ireland over the phone just yesterday ;-).
>
> Then again, they've just been acquired by Seagate, or so, so things
> may change for the worse, who knows.
>
> I'd watch out regarding the Western Digital disks, apparently they
> have a bad habit of turning themselves off when used in RAID mode, for
> some reason:
> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
>
I have exactly the opposite experience. More than 50% of Maxtor drives
fail inside 18 months; WDs seem to be really solid.
-hpa
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-22 22:11 ` Molle Bestefich
2006-06-23 2:44 ` H. Peter Anvin
@ 2006-06-23 3:27 ` Bill Davidsen
2006-06-23 6:45 ` Ricky Beam
2006-06-28 2:42 ` Mike Dresser
3 siblings, 0 replies; 19+ messages in thread
From: Bill Davidsen @ 2006-06-23 3:27 UTC (permalink / raw)
To: Molle Bestefich; +Cc: Christian Pernegger, linux-raid
Molle Bestefich wrote:
> Christian Pernegger wrote:
>
>> Anything specific wrong with the Maxtors?
>
>
> No. I've used Maxtor for a long time and I'm generally happy with them.
>
> They break now and then, but their online warranty system is great.
> I've also been treated kindly by their help desk - talked to a cute
> gal from Maxtor in Ireland over the phone just yesterday ;-).
>
> Then again, they've just been acquired by Seagate, or so, so things
> may change for the worse, who knows.
>
> I'd watch out regarding the Western Digital disks, apparently they
> have a bad habit of turning themselves off when used in RAID mode, for
> some reason:
> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
Based on three trials in five years, I'm happy with WD and Seagate. WD
didn't ask when I had bought the drive, just for the serial number to
establish the manufacturing date.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-22 22:11 ` Molle Bestefich
2006-06-23 2:44 ` H. Peter Anvin
2006-06-23 3:27 ` Bill Davidsen
@ 2006-06-23 6:45 ` Ricky Beam
2006-06-28 2:42 ` Mike Dresser
3 siblings, 0 replies; 19+ messages in thread
From: Ricky Beam @ 2006-06-23 6:45 UTC (permalink / raw)
To: Molle Bestefich; +Cc: linux-raid
On Fri, 23 Jun 2006, Molle Bestefich wrote:
>I'd watch out regarding the Western Digital disks, apparently they
>have a bad habit of turning themselves off when used in RAID mode, for
>some reason:
>http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
Where "for some reason" == HEAT. I've seen Maxtor, Seagate, AND Western
Digital drives all shutdown when they get too hot -- so hot you cannot
touch them. I know this all too well because Dell is stupid or lazy
to design their cases with proper ventilation over the drives; one drive
simply gets hot, two drives get hot enough to discolor their plastic
drive sleds.
Unless you're talking about little laptop drives, hard drives need active
cooling. A few CFM is usually enough. A LOT of people underestimate
the cooling needs of their drives. (and sadly that includes far too many
manufacturers of IDE/SATA drive cages.)
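A minimal sketch of the kind of temperature check that catches this before a
drive shuts itself down, assuming smartmontools is installed and the drives
report SMART attribute 194 (Temperature_Celsius); the device names and the
threshold are placeholders, not anything from this thread:

#!/usr/bin/env python
# Minimal sketch: poll SMART temperature on a few drives and warn above
# a threshold. Assumes smartmontools is installed and each drive reports
# attribute 194 (Temperature_Celsius); device names and the threshold
# are placeholders.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # hypothetical device names
THRESHOLD_C = 45                                 # warn above this

def drive_temp(dev):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # attribute rows look roughly like:
        # 194 Temperature_Celsius 0x0022 114 091 000 Old_age Always - 33
        if len(fields) >= 10 and fields[0] == "194" and fields[9].isdigit():
            return int(fields[9])
    return None

for dev in DEVICES:
    t = drive_temp(dev)
    if t is None:
        print("%s: no temperature attribute reported" % dev)
    elif t >= THRESHOLD_C:
        print("%s: %d C -- check the airflow over the drive cage" % (dev, t))
    else:
        print("%s: %d C" % (dev, t))

Run from cron every few minutes, something like this flags a cooling problem
long before the drive sleds start to discolor.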
--Ricky
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-22 22:11 ` Molle Bestefich
` (2 preceding siblings ...)
2006-06-23 6:45 ` Ricky Beam
@ 2006-06-28 2:42 ` Mike Dresser
2006-06-28 6:23 ` bart
2006-06-28 7:25 ` Christian Pernegger
3 siblings, 2 replies; 19+ messages in thread
From: Mike Dresser @ 2006-06-28 2:42 UTC (permalink / raw)
To: linux-raid
On Fri, 23 Jun 2006, Molle Bestefich wrote:
> Christian Pernegger wrote:
>> Anything specific wrong with the Maxtors?
>
> I'd watch out regarding the Western Digital disks, apparently they
> have a bad habit of turning themselves off when used in RAID mode, for
> some reason:
> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
The MaxLine IIIs (7V300F0) with VA111630/670 firmware currently time out
at least once a week. I'm still testing VA111680 on a 15x300 GB
array.
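For anyone trying to track which firmware a mixed batch is actually running,
here is a rough sketch that lists the model and firmware revision of every
member of an md array, assuming smartmontools is installed; the array name is
a placeholder and the /proc/mdstat parsing is deliberately crude:

#!/usr/bin/env python
# Sketch: print model and firmware revision for every member of an md
# array so a mixed-firmware set (e.g. VA111630 vs VA111680) stands out.
# Assumes smartmontools; the array name is a placeholder.
import re
import subprocess

ARRAY = "md0"   # hypothetical array name

def members(array):
    devs = []
    with open("/proc/mdstat") as f:
        for line in f:
            if line.startswith(array + " :"):
                # e.g. "md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]"
                devs = [re.sub(r"\d*\[\d+\].*$", "", tok)
                        for tok in line.split()[4:]]
    return ["/dev/" + d for d in devs]

def drive_info(dev):
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    model = firmware = "unknown"
    for line in out.splitlines():
        if line.startswith("Device Model:"):
            model = line.split(":", 1)[1].strip()
        elif line.startswith("Firmware Version:"):
            firmware = line.split(":", 1)[1].strip()
    return model, firmware

for dev in members(ARRAY):
    model, fw = drive_info(dev)
    print("%-10s %-25s firmware %s" % (dev, model, fw))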
Mike
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 2:42 ` Mike Dresser
@ 2006-06-28 6:23 ` bart
2006-06-28 6:45 ` Brad Campbell
2006-06-28 10:18 ` Justin Piszcz
2006-06-28 7:25 ` Christian Pernegger
1 sibling, 2 replies; 19+ messages in thread
From: bart @ 2006-06-28 6:23 UTC (permalink / raw)
To: Mike Dresser; +Cc: linux-raid
Mike Dresser wrote:
>
> On Fri, 23 Jun 2006, Molle Bestefich wrote:
>
> > Christian Pernegger wrote:
> >> Anything specific wrong with the Maxtors?
> >
> > I'd watch out regarding the Western Digital disks, apparently they
> > have a bad habit of turning themselves off when used in RAID mode, for
> > some reason:
> > http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
>
> The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
> on a weekly or less basis.. I'm still testing VA111680 on a 15x300 gig
> array
>
We also see a similar problem on Maxtor 6V250F0 drives: they 'crash' randomly on
a timescale of weeks. The only way to get them back is by power cycling. We have tried
both a SuperMicro SATA card (Marvell chip) and a Promise FastTrak; firmware updates from
Maxtor have not fixed it yet. We were already forced to exchange all the drives at
one customer because he does not want to use Maxtors anymore. Neither do we :(
Bart
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 6:23 ` bart
@ 2006-06-28 6:45 ` Brad Campbell
2006-06-28 10:18 ` Justin Piszcz
1 sibling, 0 replies; 19+ messages in thread
From: Brad Campbell @ 2006-06-28 6:45 UTC (permalink / raw)
To: bart; +Cc: Mike Dresser, linux-raid
bart@ardistech.com wrote:
> Mike Dresser wrote:
>> On Fri, 23 Jun 2006, Molle Bestefich wrote:
>>
>>> Christian Pernegger wrote:
>>>> Anything specific wrong with the Maxtors?
>>> I'd watch out regarding the Western Digital disks, apparently they
>>> have a bad habit of turning themselves off when used in RAID mode, for
>>> some reason:
>>> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
>> The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
>> on a weekly or less basis.. I'm still testing VA111680 on a 15x300 gig
>> array
>>
> We also see similar problem on Maxtor 6V250F0 drives: they 'crash' randomly at
> a weeks timescale. Only way to get them back is by power cycling. Tried both
> SuperMicro SATA card (Marvell chip) and Promise Fastrak, firmware updates from
> Maxtor did not fix it yet. We were already forced to exchange all drives at
> a customer because he does not want to use Maxtor's anymore. Neither do we :(
Whereas I have 28 7Y250M0 drives sitting in a couple of arrays here that have behaved perfectly
(aside from some grown defects) for over 18000 hours so far. They are *all* sitting on Promise
SATA150TX4 cards on 2.6 kernels.
I'm looking at another server and another 15 drives at the moment, and it's Maxtors I'm looking at.
Everyone's experience is different. I would not touch Seagate with a 10-foot pole (I blew up way too
many logic boards when I was using them), and I got bitten *badly* by the WD firmware issue with
RAID (a firmware upgrade fixed that, but it can't bring back the data I lost when 3 of them failed at the
same time and the array got corrupted).
Having said that, it was MaxLine III 300G drives I was looking at, so perhaps I'll wait a little
longer and hear some more stories before I drop $$ on 15 of them.
Brad
--
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 6:23 ` bart
2006-06-28 6:45 ` Brad Campbell
@ 2006-06-28 10:18 ` Justin Piszcz
2006-06-28 15:23 ` Mike Dresser
1 sibling, 1 reply; 19+ messages in thread
From: Justin Piszcz @ 2006-06-28 10:18 UTC (permalink / raw)
To: bart@ardistech.com; +Cc: Mike Dresser, linux-raid
On Wed, 28 Jun 2006, bart@ardistech.com wrote:
> Mike Dresser wrote:
>>
>> On Fri, 23 Jun 2006, Molle Bestefich wrote:
>>
>>> Christian Pernegger wrote:
>>>> Anything specific wrong with the Maxtors?
>>>
>>> I'd watch out regarding the Western Digital disks, apparently they
>>> have a bad habit of turning themselves off when used in RAID mode, for
>>> some reason:
>>> http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/1980/
>>
>> The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
>> on a weekly or less basis.. I'm still testing VA111680 on a 15x300 gig
>> array
>>
> We also see similar problem on Maxtor 6V250F0 drives: they 'crash' randomly at
> a weeks timescale. Only way to get them back is by power cycling. Tried both
> SuperMicro SATA card (Marvell chip) and Promise Fastrak, firmware updates from
> Maxtor did not fix it yet. We were already forced to exchange all drives at
> a customer because he does not want to use Maxtor's anymore. Neither do we :(
>
> Bart
>
> The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
> on a weekly or less basis.. I'm still testing VA111680 on a 15x300 gig
> array
How do you have the 15 drives attached? Did you buy a SATA RAID card? Do
you have multiple (cheap) JBOD SATA cards? If so, which did you use? I
cannot seem to find any PCI-e cards with >= 4-8 ports that support JBOD
for under $700-$900.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 10:18 ` Justin Piszcz
@ 2006-06-28 15:23 ` Mike Dresser
0 siblings, 0 replies; 19+ messages in thread
From: Mike Dresser @ 2006-06-28 15:23 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid
On Wed, 28 Jun 2006, Justin Piszcz wrote:
> How do you have the 15 drives attached? Did you buy a SATA raid card? Do you
> have multiple (cheap JBOD SATA cards)? If so, which did you use? I cannot
> seem to find any PCI-e cards with >= 4-8 slots that support JBOD under
> $700-$900.
We're using a 3ware 9550SX-16ML, which is a 133 MHz PCI-X card. They also
have a 9590SE that does the same over PCI-E, though I don't know if the
stock 2.6.x kernel supports these yet.
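A quick, hedged way to answer the "does my kernel support it" question,
assuming the card uses the 3w-9xxx driver like the rest of the 9000 series and
that modinfo/lspci are available; this is a sketch, not a statement about which
kernel release added which card:

#!/usr/bin/env python
# Sketch: check whether the running kernel ships the 3ware 9000-series
# driver (module name 3w-9xxx) and whether a 3ware controller is visible
# on the PCI bus. The module name and the lspci vendor strings are the
# usual ones, but verify against your own kernel.
import subprocess

def have_module(name):
    # modinfo exits non-zero if the module isn't available for this kernel
    return subprocess.run(["modinfo", name],
                          capture_output=True).returncode == 0

def threeware_on_bus():
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return [l for l in out.splitlines() if "3ware" in l or "AMCC" in l]

print("3w-9xxx driver available:", have_module("3w-9xxx"))
for line in threeware_on_bus():
    print("found controller:", line.strip())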
Mike
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 2:42 ` Mike Dresser
2006-06-28 6:23 ` bart
@ 2006-06-28 7:25 ` Christian Pernegger
2006-06-28 7:58 ` bart
2006-06-28 8:19 ` Drive issues in RAID vs. not-RAID Gordon Henderson
1 sibling, 2 replies; 19+ messages in thread
From: Christian Pernegger @ 2006-06-28 7:25 UTC (permalink / raw)
To: linux-raid
> The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
> on a weekly or less basis..
I have just one 7V300F0, so I have no idea how it behaves in a RAID. It's
been fine apart from the fact that my VIA southbridge SATA controller
doesn't even detect it ... :(
(Anyone else notice compatibility problems are through the roof lately?)
The 8x 6B300R0 (PATA) have been excellent on my PATA 3ware.
Atypically for Maxtor, not one has died yet (over a year) :)
The 8x 6Y120L0 (PATA) died at a rate of about two a year.
I mainly use Maxtor because the RMA process is automated,
hassle-free and fast. They will exchange a drive when the first bad
sector errors start to show up, and they don't insist on a low-level format to
"fix" the problem.
The Atlas SCSI line they inherited from Quantum isn't half-bad either.
What happens now that Seagate has bought them, nobody knows.
C.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ok to go ahead with this setup?
2006-06-28 7:25 ` Christian Pernegger
@ 2006-06-28 7:58 ` bart
2006-06-28 8:19 ` Drive issues in RAID vs. not-RAID Gordon Henderson
1 sibling, 0 replies; 19+ messages in thread
From: bart @ 2006-06-28 7:58 UTC (permalink / raw)
To: Christian Pernegger; +Cc: linux-raid
Christian Pernegger wrote:
>
> > The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
> > on a weekly or less basis..
>
> I have just one 7V300F0, so no idea how it behaves is a RAID. It's
> been fine apart from the fact that my VIA southbridge SATA controllers
> doesn't even detect it ... :(
>
You'll need a drive firmware update for this.
> (Anyone else notice compatibility problems are through the roof lately?)
>
Yes, the manufacturers are busy with SATA II while SATA I is still not stable...
> The 8x 6B300R0 (PATA) have been excellent on my PATA 3ware.
> Untypically for Maxtor, not one has died yet (over a year) :)
>
> The 8x 6Y120L0 (PATA) died at a rate of about two a year.
>
> I mainly use Maxtor due to the fact that the RMA process is automated,
> hassle-free and fast. They will exchange a drive when the first bad
> sector errors start to show up and not insist on a low level format to
> "fix" the problem.
>
They did not offer to exchange our 30+ drives that are having timeouts...
^ permalink raw reply [flat|nested] 19+ messages in thread
* Drive issues in RAID vs. not-RAID ..
2006-06-28 7:25 ` Christian Pernegger
2006-06-28 7:58 ` bart
@ 2006-06-28 8:19 ` Gordon Henderson
2006-06-29 3:51 ` Neil Brown
1 sibling, 1 reply; 19+ messages in thread
From: Gordon Henderson @ 2006-06-28 8:19 UTC (permalink / raw)
To: linux-raid
I've seen a few comments to the effect that some disks have problems when
used in a RAID setup, and I'm a bit perplexed as to why this might be.
What's the difference between a drive in a RAID set (either s/w or h/w)
and a drive on its own, assuming the load, etc. is roughly the same in
each setup?
Is it just "bad feeling", or are there any scientific reasons for it?
Cheers,
Gordon
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Drive issues in RAID vs. not-RAID ..
2006-06-28 8:19 ` Drive issues in RAID vs. not-RAID Gordon Henderson
@ 2006-06-29 3:51 ` Neil Brown
0 siblings, 0 replies; 19+ messages in thread
From: Neil Brown @ 2006-06-29 3:51 UTC (permalink / raw)
To: Gordon Henderson; +Cc: linux-raid
On Wednesday June 28, gordon@drogon.net wrote:
>
> I've seen a few comments to the effect that some disks have problems when
> used in a RAID setup and I'm a bit preplexed as to why this might be..
>
> What's the difference between a drive in a RAID set (either s/w or h/w)
> and a drive on it's own, assuming the load, etc. is roughly the same in
> each setup?
>
> Is it just "bad feeling" or is there any scientific reasons for it?
I don't think that 'disks' have problems being in a RAID, but I
believe some controllers do (though I don't know whether it is the
controller or the driver that is at fault). RAID makes concurrent
requests much more likely, and so is likely to push hard on any locking
issues.
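A rough way to test that theory outside md: hammer the suspect drives with
concurrent reads and see whether the controller still misbehaves. This is a
sketch under assumptions -- the device names, read size and duration are
placeholders, it needs root, and reads alone may not reproduce every failure
mode:

#!/usr/bin/env python
# Rough sketch: issue reads against several drives at once to mimic the
# concurrent load a RAID array generates, without md in the path. If the
# controller only falls over under this load, suspect the controller or
# its driver rather than md. Device names, read size and duration are
# placeholders; run as root, and only on drives you can afford to hammer.
import os
import random
import threading
import time

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]   # hypothetical
CHUNK = 1024 * 1024                              # 1 MiB per read
DURATION = 600                                   # seconds

def hammer(dev):
    fd = os.open(dev, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    deadline = time.time() + DURATION
    while time.time() < deadline:
        # random offsets keep every spindle seeking at the same time
        off = random.randrange(0, max(size - CHUNK, 1))
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, CHUNK)
    os.close(fd)

threads = [threading.Thread(target=hammer, args=(d,)) for d in DEVICES]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("finished; check dmesg for resets or timeouts")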
NeilBrown
^ permalink raw reply [flat|nested] 19+ messages in thread