linux-raid.vger.kernel.org archive mirror
* disks becoming slow but not explicitly failing anyone?
@ 2006-04-22 20:05 Carlos Carvalho
  2006-04-23  0:45 ` Mark Hahn
  2006-04-27  3:31 ` Konstantin Olchanski
  0 siblings, 2 replies; 7+ messages in thread
From: Carlos Carvalho @ 2006-04-22 20:05 UTC
  To: linux-raid

We've been hit by a strange problem for about nine months now. Our
main server suddenly becomes very unresponsive, the load skyrockets,
and if demand is high enough it collapses. top shows many processes
stuck in D state. There are no raid or disk error messages, either on
the console or in the logs.

The machine has 4 IDE disks in a software raid5 array, connected to a
3Ware 7506. Only once have I seen warnings of SCSI resets of the 3Ware
due to timeouts.

The 3Ware card has LEDs that light up when there's activity on an IDE
channel. As expected, all the LEDs turn on and off almost simultaneously
during normal operation of the raid5; however, when the problem appears,
one of the LEDs stays on much longer than the others during each burst of
activity. This shows that one disk is getting much slower than the
others, holding up the whole array.
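
A rough way to put numbers on this is to sample /proc/diskstats twice and
compare the average time per read across the disks. A minimal sketch
(assuming a 2.6 kernel; field layout as in the kernel's
Documentation/iostats.txt):

#!/usr/bin/env python
# Sample /proc/diskstats twice, 10 seconds apart, and print the average
# time per completed read for each whole disk over that interval.
import time

def snapshot():
    stats = {}
    for line in open('/proc/diskstats'):
        f = line.split()
        if len(f) < 14:              # partition lines are shorter; skip
            continue
        # f[2] = device name, f[3] = reads completed, f[6] = ms reading
        stats[f[2]] = (int(f[3]), int(f[6]))
    return stats

a = snapshot()
time.sleep(10)
b = snapshot()

for name in sorted(b):
    if name in a:
        d_reads = b[name][0] - a[name][0]
        d_ms    = b[name][1] - a[name][1]
        if d_reads > 0:
            print("%-10s %8d reads  %7.1f ms/read"
                  % (name, d_reads, float(d_ms) / d_reads))

A disk whose ms/read figure sits far above the others during the stalls
is the one the LED is pointing at, even though it never logs an error.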

Several times a SMART test of the disk has shown read failures, but not
always. I've changed cables, changed the 3Ware card, and even connected
the slow disk to an IDE channel on the motherboard, all to no avail.
Replacing the disk and reconstructing the array restores normal operation.

This has happened with 7 (seven!!) disks already, 80GB and 120GB,
Maxtor and Seagate. Has anyone else seen this?


* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-22 20:05 disks becoming slow but not explicitly failing anyone? Carlos Carvalho
@ 2006-04-23  0:45 ` Mark Hahn
  2006-04-23 13:38   ` Nix
  2006-04-27  3:31 ` Konstantin Olchanski
  1 sibling, 1 reply; 7+ messages in thread
From: Mark Hahn @ 2006-04-23  0:45 UTC
  To: Carlos Carvalho; +Cc: linux-raid

> This has happened with 7 (seven!!) disks already, 80GB and 120GB,
> Maxtor and Seagate. Has anyone else seen this?

sure.  but you should be asking what's wrong with your environment
or supply chain to cause this kind of disk degradation.  how's your 
cooling?  power?  some people claim that if you put a normal (desktop)
drive into a 24x7 server (with real round-the-clock load), you should 
expect failures quite promptly.  I'm inclined to believe that with 
MTBFs upwards of 1M hours, vendors would not claim a 3-5yr warranty
unless the actual failure rate was low, even if only running 8/24.



* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-23  0:45 ` Mark Hahn
@ 2006-04-23 13:38   ` Nix
  2006-04-23 18:04     ` Mark Hahn
  0 siblings, 1 reply; 7+ messages in thread
From: Nix @ 2006-04-23 13:38 UTC
  To: Mark Hahn; +Cc: Carlos Carvalho, linux-raid

On 23 Apr 2006, Mark Hahn said:
>                   some people claim that if you put a normal (desktop)
> drive into a 24x7 server (with real round-the-clock load), you should 
> expect failures quite promptly.  I'm inclined to believe that with 
> MTBFs upwards of 1M hours, vendors would not claim a 3-5yr warranty
> unless the actual failure rate was low, even if only running 8/24.

I've seen a lot of cheap disks say (generally deep in the data sheet
that's only available online after much searching and that nobody ever
reads) that they are only reliable if used for a maximum of twelve hours
a day, or 90 hours a week, or something of that nature. Even server
disks generally seem to say something like that, but the figure given is
more like `168 hours a week', i.e., constant use.

It still stuns me that anyone would ever voluntarily buy drives that
can't be left switched on (which is perhaps why the manufacturers hide
the info in such an obscure place), and I don't know what might go wrong
if you use the disk `too much': overheating?

But still it seems that there are crappy disks out there with very
silly limits on the time they can safely be used for.

(But this *is* the RAID list: we know that disks suck, right?)

-- 
`On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only
 because bringing Windows into the picture rescaled "brokenness" by
 a factor of 10.' --- Peter da Silva


* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-23 13:38   ` Nix
@ 2006-04-23 18:04     ` Mark Hahn
  2006-04-24 19:20       ` Nix
  0 siblings, 1 reply; 7+ messages in thread
From: Mark Hahn @ 2006-04-23 18:04 UTC
  To: linux-raid

> I've seen a lot of cheap disks say (generally deep in the data sheet
> that's only available online after much searching and that nobody ever
> reads) that they are only reliable if used for a maximum of twelve hours
> a day, or 90 hours a week, or something of that nature. Even server

I haven't, and I read lots of specs.  they _will_ sometimes say that 
non-enterprise drives are "intended" or "designed" for an 8x5 desktop-like
usage pattern.  to the normal way of thinking about reliability, this would 
simply mean a factor of 4.2x lower reliability - say from 1M to 250K hours
MTBF.  that's still a far lower failure rate than power supplies or 
fans.

> It still stuns me that anyone would ever voluntarily buy drives that
> can't be left switched on (which is perhaps why the manufacturers hide

I've definitely never seen any spec that stated that the drive had to be 
switched off.  the issue is really just "what is the designed duty-cycle?"

I run a number of servers which are used as compute clusters.  load is
definitely 24x7, since my users always keep the queues full.  but the servers
are not maxed out 24x7, and do work quite nicely with desktop drives
for years at a time.  it's certainly also significant that these are in a 
decent machineroom environment.

it's unfortunate that disk vendors aren't more forthcoming with their drive
stats.  for instance, it's obvious that "wear" in MTBF terms would depend 
nonlinearly on the duty cycle.  it's important for a customer to know where 
that curve bends, and to try to stay in the low-wear zone.  similarly, disk
specs often just give a max operating temperature (often 60C!), which is 
almost disingenuous, since temperature has a superlinear effect on reliability.

a system designer needs to evaluate the expected duty cycle when choosing
disks, as well as many other factors which are probably more important.
for instance, an earlier thread concerned a vast amount of read traffic 
to disks resulting from atime updates.  obviously, just mounting noatime 
will improve the system's reliability.  providing a bit more memory on a 
fileserver to cache and eliminate IOs is another great way to help out.
simply using more disks also decreases the load per disk, though this is 
clearly only a win if it's the difference in staying out of the disks 
"duty-cycle danger zone" (since more disks divide system MTBF).

regards, mark hahn.



* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-23 18:04     ` Mark Hahn
@ 2006-04-24 19:20       ` Nix
  2006-05-05  0:45         ` Bill Davidsen
  0 siblings, 1 reply; 7+ messages in thread
From: Nix @ 2006-04-24 19:20 UTC
  To: Mark Hahn; +Cc: linux-raid

On 23 Apr 2006, Mark Hahn stipulated:
>> I've seen a lot of cheap disks say (generally deep in the data sheet
>> that's only available online after much searching and that nobody ever
>> reads) that they are only reliable if used for a maximum of twelve hours
>> a day, or 90 hours a week, or something of that nature. Even server
> 
> I haven't, and I read lots of specs.  they _will_ sometimes say that 
> non-enterprise drives are "intended" or "designed" for an 8x5 desktop-like
> usage pattern.

That's the phrasing, yes: foolish me assumed that meant `if you leave it
on for much longer than that, things will go wrong'.

>                 to the normal way of thinking about reliability, this would 
> simply mean a factor of 4.2x lower reliability - say from 1M to 250K hours
> MTBF.  that's still a far lower failure rate than power supplies or 
> fans.

Ah, right, it's not a drastic change.

>> It still stuns me that anyone would ever voluntarily buy drives that
>> can't be left switched on (which is perhaps why the manufacturers hide
> 
> I've definitely never seen any spec that stated that the drive had to be 
> switched off.  the issue is really just "what is the designed duty-cycle?"

I see. So it's just `we didn't try to push the MTBF up as far as we would
on other sorts of disks'.

> I run a number of servers which are used as compute clusters.  load is
> definitely 24x7, since my users always keep the queues full.  but the servers
> are not maxed out 24x7, and do work quite nicely with desktop drives
> for years at a time.  it's certainly also significant that these are in a 
> decent machineroom environment.

Yeah; i.e., cooled. I don't have a cleanroom in my house so the RAID
array I run there is necessarily uncooled, and the alleged aircon in the
room housing work's array is permanently on the verge of total collapse
(I think it lowers the temperature, but not by much).

> it's unfortunate that disk vendors aren't more forthcoming with their drive
> stats.  for instance, it's obvious that "wear" in MTBF terms would depend 
> nonlinearly on the duty cycle.  it's important for a customer to know where 
> that curve bends, and to try to stay in the low-wear zone.  similarly, disk

Agreed! I tend to assume that non-laptop disks hate being turned on and
hate temperature changes, so just keep them running 24x7. This seems to be OK,
with the only disks this has ever killed being Hitachi server-class disks in
a very expensive Sun server which was itself meant for 24x7 operation; the
cheaper disks in my home systems were quite happy. (Go figure...)

> specs often just give a max operating temperature (often 60C!), which is 
> almost disingenuous, since temperature has a superlinear effect on reliability.

I'll say. I'm somewhat twitchy about the uncooled 37C disks in one of my
machines: but one of the other disks ran at well above 60C for *years*
without incident: it was an old one with no onboard temperature sensing,
and it was perhaps five years after startup that I opened that machine
for the first time in years and noticed that the disk housing nearly
burned me when I touched it. The guy who installed it said that yes, it
had always run that hot, and was that important? *gah*

I got a cooler for that disk in short order.

> a system designer needs to evaluate the expected duty cycle when choosing
> disks, as well as many other factors which are probably more important.
> for instance, an earlier thread concerned a vast amount of read traffic 
> to disks resulting from atime updates.

Oddly, I see a steady pulse of write traffic, ~100Kb/s, to one dm device
(translating into read+write on the underlying disks) even when the
system is quiescent, all daemons killed, and all fsen mounted with
noatime. One of these days I must fish out blktrace and see what's
causing it (but that machine is hard to quiesce like that: it's in heavy
use).

> simply using more disks also decreases the load per disk, though this is 
> clearly only a win if it's the difference in staying out of the disks 
> "duty-cycle danger zone" (since more disks divide system MTBF).

Well, yes, but if you have enough more you can make some of them spares
and push up the MTBF again (and the cooling requirements, and the power
consumption: I wish there was a way to spin down spares until they were
needed, but non-laptop controllers don't often seem to provide a way to
spin anything down at all that I know of).

-- 
`On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only
 because bringing Windows into the picture rescaled "brokenness" by
 a factor of 10.' --- Peter da Silva


* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-22 20:05 disks becoming slow but not explicitly failing anyone? Carlos Carvalho
  2006-04-23  0:45 ` Mark Hahn
@ 2006-04-27  3:31 ` Konstantin Olchanski
  1 sibling, 0 replies; 7+ messages in thread
From: Konstantin Olchanski @ 2006-04-27  3:31 UTC
  To: Carlos Carvalho; +Cc: linux-raid

On Sat, Apr 22, 2006 at 05:05:34PM -0300, Carlos Carvalho wrote:
> We've been hit by a strange problem for about nine months now. Our
> main server suddenly becomes very unresponsive, the load skyrockets,
> and if demand is high enough it collapses. top shows many processes
> stuck in D state. There are no raid or disk error messages, either on
> the console or in the logs.

Yes, I see similar behaviour with IDE and SATA disks, on various
interfaces, including a 3ware 8506-12 12-port SATA unit.

I use a disk-testing program that basically does
"dd if=/dev/disk of=/dev/null", except that it does not give up
on I/O errors, and it also measures and reports the response time
of every read() system call. I run it on all my disks every Tuesday.
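
The sketch below is only the same idea, not the actual program: read the
raw device in 1 MB chunks, time every read(), report the slow ones, and
seek past a chunk that returns an I/O error instead of giving up.

#!/usr/bin/env python
# Illustrative sketch only.  Usage: scan.py /dev/sdX  (needs read access
# to the raw device; run it against an otherwise idle disk if possible).
import os, sys, time

dev   = sys.argv[1]
chunk = 1024 * 1024          # read size
slow  = 1.0                  # seconds; report anything slower than this

fd = os.open(dev, os.O_RDONLY)
offset = 0
errors = 0
while True:
    os.lseek(fd, offset, 0)  # 0 = SEEK_SET
    t0 = time.time()
    try:
        data = os.read(fd, chunk)
    except OSError as e:
        errors += 1
        print("%12d  read error: %s" % (offset, e))
        offset += chunk      # skip the bad chunk and keep going
        continue
    dt = time.time() - t0
    if not data:
        break                # end of device
    if dt > slow:
        print("%12d  read took %.1f s" % (offset, dt))
    offset += len(data)
os.close(fd)
print("done, %d read errors" % errors)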

Most disks respond to every read in under 1 second. (There is
always some variation, and there are delays caused by other
programs accessing the disks while the test is running.)

Sometimes, some disks take 5-10 seconds to respond; I now consider
this "normal". It's "just" a case of "hard to read" sectors.

Sometimes, some disks take 30-40 seconds to respond, and this sometimes
results in I/O errors to the user code (a timeout + reset on the hardware
side). Sometimes SMART errors are logged, but not always. The "md" driver
does not like these errors and fails the disk out of RAID5 and
"RAID1/mirror" arrays; "RAID0/striped" arrays seem to survive. I consider
these disks "defective" and replace them as soon as possible. They usually
fail the vendor diagnostics and I do warranty exchanges.

I once had a disk that on some days did all reads in under 1 second,
but on other days took more than 30 seconds (IDE timeout + reset +
I/O error). It was probably correlated with the disk temperature.

I now have two SATA disks in the same enclosure: one consistently
gives I/O errors (there is one unreadable bad sector, also
reported by SMART), the other gives errors maybe every other
time (i.e. it has a "hard to read" sector). (For logistical reasons
I have been slow to replace both disks.)

K.O.


-- 
Konstantin Olchanski
Data Acquisition Systems: The Bytes Must Flow!
Email: olchansk-at-triumf-dot-ca
Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada


* Re: disks becoming slow but not explicitly failing anyone?
  2006-04-24 19:20       ` Nix
@ 2006-05-05  0:45         ` Bill Davidsen
  0 siblings, 0 replies; 7+ messages in thread
From: Bill Davidsen @ 2006-05-05  0:45 UTC
  To: Nix; +Cc: Mark Hahn, linux-raid

Nix wrote:

>Well, yes, but if you have enough more you can make some of them spares
>and push up the MTBF again (and the cooling requirements, and the power
>consumption: I wish there was a way to spin down spares until they were
>needed, but non-laptop controllers don't often seem to provide a way to
>spin anything down at all that I know of).

hdparm will let you set the spindown time. I have all mine set that way
for power and heat reasons, since they tend to be in burst use. It dropped
the CR temperature by enough to notice, but I still need some more local
cooling for that room.
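
For example (the exact -S value encoding is described in hdparm(8), so
check it before copying; 241 should mean a 30-minute standby timeout):

  hdparm -S 241 /dev/hda   # spin down after ~30 minutes idle

Whether a given RAID controller actually passes that through to the drive
is another question, of course.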

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



end of thread

Thread overview: 7+ messages
2006-04-22 20:05 disks becoming slow but not explicitly failing anyone? Carlos Carvalho
2006-04-23  0:45 ` Mark Hahn
2006-04-23 13:38   ` Nix
2006-04-23 18:04     ` Mark Hahn
2006-04-24 19:20       ` Nix
2006-05-05  0:45         ` Bill Davidsen
2006-04-27  3:31 ` Konstantin Olchanski
