* Recent drive errors
@ 2015-05-19 11:08 Thomas Fjellstrom
2015-05-19 12:34 ` Phil Turmel
From: Thomas Fjellstrom @ 2015-05-19 11:08 UTC (permalink / raw)
To: linux-raid@vger.kernel.org
Hi,
I have this one drive that dropped out of one of my arrays once. It shows UNC
errors in SMART (log appended), and Reported_Uncorrect is 5. There are no
SMART test failures or any other SMART values that look spectacularly wrong,
other than maybe Load_Cycle_Count, which is 10625 (these Seagates used to
constantly park and unpark before I updated the firmware).
I'm wondering whether or not this drive is still safe to use. I feel like I
can't trust it, especially after all the other Seagates I've had fail in
the past few years. I'm running a tool called whdd on it right now and it
shows very consistent latency spikes above 150ms. Really, I'm wondering if
this drive is RMA-able as-is, or if I have to wait for it to degrade further, as
I have another drive with around 10k reallocated sectors to send in. I have
already replaced both with WD Reds, so I can do whatever tests are needed to
figure it out.
Thanks for any help,
SMART log:
# smartctl -a /dev/sdf
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-4.0.0-1-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST3000DM001-9YN166
Serial Number: W1F2G312
LU WWN Device Id: 5 000c50 060014689
Firmware Version: CC4H
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 4
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue May 19 04:42:33 2015 MDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 592) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 345) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x3085) SCT Status supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 115 099 006 Pre-fail Always - 100468288
3 Spin_Up_Time 0x0003 092 091 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 526
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 9795200
9 Power_On_Hours 0x0032 094 094 000 Old_age Always - 5590
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 36
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 095 095 000 Old_age Always - 5
188 Command_Timeout 0x0032 100 099 000 Old_age Always - 0 0 1
189 High_Fly_Writes 0x003a 099 099 000 Old_age Always - 1
190 Airflow_Temperature_Cel 0x0022 064 054 045 Old_age Always - 36 (Min/Max 35/36)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 524
193 Load_Cycle_Count 0x0032 095 095 000 Old_age Always - 10625
194 Temperature_Celsius 0x0022 036 046 000 Old_age Always - 36 (0 18 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 5572h+04m+02.074s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 44489827126891
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 22877192047256
SMART Error Log Version: 1
ATA Error Count: 5
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 5 occurred at disk power-on lifetime: 5309 hours (221 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 c0 ff ff ff 4f 00 48d+03:55:47.791 READ FPDMA QUEUED
61 00 80 ff ff ff 4f 00 48d+03:55:47.791 WRITE FPDMA QUEUED
e5 00 00 00 00 00 00 00 48d+03:55:47.784 CHECK POWER MODE
60 00 08 08 00 00 40 00 48d+03:55:47.755 READ FPDMA QUEUED
60 00 08 00 00 00 40 00 48d+03:55:47.751 READ FPDMA QUEUED
Error 4 occurred at disk power-on lifetime: 5309 hours (221 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 48d+03:55:44.672 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:44.672 READ FPDMA QUEUED
60 00 c0 ff ff ff 4f 00 48d+03:55:44.672 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 48d+03:55:44.672 READ FPDMA QUEUED
60 00 08 00 00 00 40 00 48d+03:55:44.672 READ FPDMA QUEUED
Error 3 occurred at disk power-on lifetime: 5309 hours (221 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 48d+03:55:41.802 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 48d+03:55:41.802 READ FPDMA QUEUED
60 00 c0 ff ff ff 4f 00 48d+03:55:41.802 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:41.802 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:41.801 READ FPDMA QUEUED
Error 2 occurred at disk power-on lifetime: 5309 hours (221 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 ff ff ff 4f 00 48d+03:55:38.809 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 48d+03:55:38.793 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:38.793 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:38.792 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:38.792 READ FPDMA QUEUED
Error 1 occurred at disk power-on lifetime: 5309 hours (221 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 48d+03:55:35.636 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:35.636 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:35.636 READ FPDMA QUEUED
60 00 40 ff ff ff 4f 00 48d+03:55:35.636 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 48d+03:55:35.636 READ FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 5574 -
# 2 Extended offline Completed without error 00% 5571 -
# 3 Extended offline Completed without error 00% 5561 -
# 4 Short offline Completed without error 00% 5556 -
# 5 Short offline Completed without error 00% 5553 -
# 6 Short offline Completed without error 00% 5529 -
# 7 Short offline Completed without error 00% 5505 -
# 8 Short offline Completed without error 00% 5481 -
# 9 Short offline Completed without error 00% 5457 -
#10 Short offline Completed without error 00% 5433 -
#11 Short offline Completed without error 00% 5409 -
#12 Short offline Completed without error 00% 5385 -
#13 Short offline Completed without error 00% 5361 -
#14 Short offline Completed without error 00% 5337 -
#15 Short offline Completed without error 00% 5313 -
#16 Short offline Completed without error 00% 5289 -
#17 Short offline Completed without error 00% 5265 -
#18 Short offline Completed without error 00% 5241 -
#19 Short offline Completed without error 00% 5217 -
#20 Short offline Completed without error 00% 5193 -
#21 Short offline Completed without error 00% 5169 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-19 11:08 Recent drive errors Thomas Fjellstrom
@ 2015-05-19 12:34 ` Phil Turmel
2015-05-19 12:50 ` Thomas Fjellstrom
From: Phil Turmel @ 2015-05-19 12:34 UTC (permalink / raw)
To: thomas, linux-raid@vger.kernel.org
Hi Thomas,
On 05/19/2015 07:08 AM, Thomas Fjellstrom wrote:
> Hi,
>
> I have this one drive that dropped out of one of my arrays once. It shows UNC
> errors in SMART (log appended), and Reported_Uncorrect is 5. There are no
> smart test failures or any other SMART values that look spectacularly wrong,
> other than maybe Load_Cycle_Count which is 10625 (these seagates used to
> constantly park and unpark before i updated the firmware).
>
> I'm wondering whether or not this drive is still safe to use. I feel like I
> can't trust it, especially after all the other Seagates I had that failed in
> the past few years. I'm running a tool called whdd on it right now and it
> shows very consistent latency spikes above 150ms. Really, I'm wondering if
> this drive is RMAable as is, or if i have to wait for it to degrade further as
> i have another drive with like 10k reallocated sectors to send in. I have
> already replaced both with WD Red's so I can do whatever tests are needed to
> figure it out.
Based on the SMART report, this drive is perfectly healthy. A small
number of uncorrectable read errors is normal in the life of any drive.
It has no relocations, and no pending sectors. The latency spikes are
likely due to slow degradation of some sectors that the drive is having
to internally retry to read successfully. Again, normal.
I own some "DM001" drives -- they are unsuited to raid duty as they
don't support ERC. So, out of the box, they are time bombs for any
array you put them in. That's almost certainly why they were ejected
from your array.
If you absolutely must use them, you *must* set the *driver* timeout to
120 seconds or more.
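(For illustration, assuming the drive shows up as /dev/sdf as in your log: the
timeout lives in sysfs and can be raised with something like

  echo 180 > /sys/block/sdf/device/timeout

repeated for each array member from a boot script, since the setting does not
survive a reboot.)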
HTH,
Phil
http://marc.info/?l=linux-raid&m=133761065622164&w=2
http://marc.info/?l=linux-raid&m=135811522817345&w=1
http://marc.info/?l=linux-raid&m=133761065622164&w=2
http://marc.info/?l=linux-raid&m=133665797115876&w=2
* Re: Recent drive errors
2015-05-19 12:34 ` Phil Turmel
@ 2015-05-19 12:50 ` Thomas Fjellstrom
2015-05-19 13:23 ` Phil Turmel
2015-05-21 7:58 ` Mikael Abrahamsson
From: Thomas Fjellstrom @ 2015-05-19 12:50 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid@vger.kernel.org
On Tue 19 May 2015 08:34:55 AM Phil Turmel wrote:
> Hi Thomas,
>
> On 05/19/2015 07:08 AM, Thomas Fjellstrom wrote:
> > Hi,
> >
> > I have this one drive that dropped out of one of my arrays once. It shows
> > UNC errors in SMART (log appended), and Reported_Uncorrect is 5. There
> > are no smart test failures or any other SMART values that look
> > spectacularly wrong, other than maybe Load_Cycle_Count which is 10625
> > (these seagates used to constantly park and unpark before i updated the
> > firmware).
> >
> > I'm wondering whether or not this drive is still safe to use. I feel like
> > I
> > can't trust it, especially after all the other Seagates I had that failed
> > in the past few years. I'm running a tool called whdd on it right now and
> > it shows very consistent latency spikes above 150ms. Really, I'm
> > wondering if this drive is RMAable as is, or if i have to wait for it to
> > degrade further as i have another drive with like 10k reallocated sectors
> > to send in. I have already replaced both with WD Red's so I can do
> > whatever tests are needed to figure it out.
>
> Based on the smart report, this drive is perfectly healthy. A small
> number of uncorrectable read errors is normal in the life of any drive.
Is it perfectly normal for the same sector to be reported uncorrectable 5
times in a row like it did?
How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
thousands?
These drives have been barely used. Most of their life, they were either off
or not actually being used. (It took a while to collect enough 3TB drives, and
then find time to build the array and set it up as a regular backup of my
11TB NAS.)
> It has no relocations, and no pending sectors. The latency spikes are
> likely due to slow degradation of some sectors that the drive is having
> to internally retry to read successfully. Again, normal.
The latency spikes are /very/ regular and there are quite a lot of them.
See: http://i.imgur.com/QjTl6o3.png
> I own some "DM001" drives -- they are unsuited to raid duty as they
> don't support ERC. So, out of the box, they are time bombs for any
> array you put them in. That's almost certainly why they were ejected
> from your array.
>
> If you absolutely must use them, you *must* set the *driver* timeout to
> 120 seconds or more.
I've been planning on looking into the ERC stuff. I now actually have some
drives that do support ERC, so it'll be interesting to make sure everything is
set up properly.
> HTH,
Thank you :)
> Phil
>
> http://marc.info/?l=linux-raid&m=133761065622164&w=2
> http://marc.info/?l=linux-raid&m=135811522817345&w=1
> http://marc.info/?l=linux-raid&m=133761065622164&w=2
> http://marc.info/?l=linux-raid&m=133665797115876&w=2
>
>
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-19 12:50 ` Thomas Fjellstrom
@ 2015-05-19 13:23 ` Phil Turmel
2015-05-19 14:32 ` Thomas Fjellstrom
2015-05-21 7:58 ` Mikael Abrahamsson
From: Phil Turmel @ 2015-05-19 13:23 UTC (permalink / raw)
To: thomas; +Cc: linux-raid@vger.kernel.org
On 05/19/2015 08:50 AM, Thomas Fjellstrom wrote:
> On Tue 19 May 2015 08:34:55 AM Phil Turmel wrote:
>> Based on the smart report, this drive is perfectly healthy. A small
>> number of uncorrectable read errors is normal in the life of any drive.
>
> Is it perfectly normal for the same sector to be reported uncorrectable 5
> times in a row like it did?
Yes, if you keep trying to read it. Unreadable sectors stay unreadable,
generally, until they are re-written. That's the first opportunity the
drive has to decide if a relocation is necessary.
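(If you ever want to force that rewrite for a specific pending sector, e.g. one
reported in the kernel log, something like "hdparm --write-sector <LBA>
--yes-i-know-what-i-am-doing /dev/sdf" will zero just that sector; in a healthy
md array a scrub/repair effectively does the same thing for you.)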
> How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
> thousands?
Depends. In a properly functioning array that gets scrubbed
occasionally, or sees sufficiently heavy use to read the entire contents
occasionally, the UREs get rewritten by MD right away. Any UREs then
only show up once.
In a desktop environment, or non-raid, or improperly configured raid,
the UREs will build up, and get reported on every read attempt.
Most consumer-grade drives claim a URE average below 1 per 1E14 bits
read. So by the end of their warranty period, getting one every 12TB
read wouldn't be unusual. This sort of thing follows a Poisson
distribution:
http://marc.info/?l=linux-raid&m=135863964624202&w=2
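(Rough arithmetic for a sense of scale: 1E14 bits is 1.25E13 bytes, about
12.5TB. Reading a full 3TB drive is roughly 2.4E13 bits, so the expected URE
count for one full pass is lambda = 2.4E13 / 1E14 = 0.24, and the Poisson
probability of hitting at least one URE is 1 - e^(-0.24), a bit over 20%.
Those are spec-sheet worst-case numbers, not measurements.)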
> These drives have been barely used. Most of their life, they were either off,
> or not actually being used. (it took a while to collect enough 3TB drives, and
> then find time to build the array, and set it up as a regular backup of my
> 11TB nas).
While being off may lengthen their life somewhat, the magnetic domains
on these things are so small that some degradation will happen just
sitting there. Diffusion in the p- and n-doped regions of the
semiconductors is also happening while sitting unused, degrading the
electronics.
>> It has no relocations, and no pending sectors. The latency spikes are
>> likely due to slow degradation of some sectors that the drive is having
>> to internally retry to read successfully. Again, normal.
>
> The latency spikes are /very/ regular and theres quite a lot of them.
> See: http://i.imgur.com/QjTl6o3.png
Interesting. I suspect that if you wipe that disk with noise, read it
all back, and wipe it again, you'll have a handful of relocations.
Your latency test will show different numbers then, as the head will
have to seek to the spare sector and back whenever you read through one
of those spots.
Or the rewrites will fix them all, and you'll have no further problems.
Hard to tell. Bottom line is that drives can't fix any problems they
have unless they are *written* in previously identified problem areas.
>> I own some "DM001" drives -- they are unsuited to raid duty as they
>> don't support ERC. So, out of the box, they are time bombs for any
>> array you put them in. That's almost certainly why they were ejected
>> from your array.
>>
>> If you absolutely must use them, you *must* set the *driver* timeout to
>> 120 seconds or more.
>
> I've been planning on looking into the ERC stuff. I now actually have some
> drives that do support ERC, so it'll be interesting to make sure everything is
> set up properly.
You have it backwards. If you have WD Reds, they are correct out of the
box. It's when you *don't* have ERC support, or you only have desktop
ERC, that you need to take special action.
If you have consumer grade drives in a raid array, and you don't have
boot scripts or udev rules to deal with timeout mismatch, your *ss is
hanging in the wind. The links in my last msg should help you out.
Also, I noticed that you used "smartctl -a" to post a complete report of
your drive's status. It's not complete. You should get in the habit of
using "smartctl -x" instead, so you see the ERC status, too.
Phil
* Re: Recent drive errors
2015-05-19 13:23 ` Phil Turmel
@ 2015-05-19 14:32 ` Thomas Fjellstrom
2015-05-19 14:51 ` Phil Turmel
From: Thomas Fjellstrom @ 2015-05-19 14:32 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid@vger.kernel.org
On Tue 19 May 2015 09:23:20 AM Phil Turmel wrote:
> On 05/19/2015 08:50 AM, Thomas Fjellstrom wrote:
> > On Tue 19 May 2015 08:34:55 AM Phil Turmel wrote:
> >> Based on the smart report, this drive is perfectly healthy. A small
> >> number of uncorrectable read errors is normal in the life of any drive.
> >
> > Is it perfectly normal for the same sector to be reported uncorrectable 5
> > times in a row like it did?
>
> Yes, if you keep trying to read it. Unreadable sectors stay unreadable,
> generally, until they are re-written. That's the first opportunity the
> drive has to decide if a relocation is necessary.
>
> > How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
> > thousands?
>
> Depends. In a properly functioning array that gets scrubbed
> occasionally, or sufficiently heavy use to read the entire contents
> occasionally, the UREs get rewritten by MD right away. Any UREs then
> only show up once.
I have made sure that it's doing regular scrubs, and regular SMART scans. This
time...
> In a desktop environment, or non-raid, or improperly configured raid,
> the UREs will build up, and get reported on every read attempt.
>
> Most consumer-grade drives claim a URE average below 1 per 1E14 bits
> read. So by the end of their warranty period, getting one every 12TB
> read wouldn't be unusual. This sort of thing follows a Poisson
> distribution:
>
> http://marc.info/?l=linux-raid&m=135863964624202&w=2
>
> > These drives have been barely used. Most of their life, they were either
> > off, or not actually being used. (it took a while to collect enough 3TB
> > drives, and then find time to build the array, and set it up as a regular
> > backup of my 11TB nas).
>
> While being off may lengthen their life somewhat, the magnetic domains
> on these things are so small that some degradation will happen just
> sitting there. Diffusion in the p- and n-doped regions of the
> semiconductors is also happening while sitting unused, degrading the
> electronics.
>
> >> It has no relocations, and no pending sectors. The latency spikes are
> >>
> >> likely due to slow degradation of some sectors that the drive is having
> >> to internally retry to read successfully. Again, normal.
> >
> > The latency spikes are /very/ regular and theres quite a lot of them.
> > See: http://i.imgur.com/QjTl6o3.png
>
> Interesting. I suspect that if you wipe that disk with noise, read it
> all back, and wipe it again, you'll have a handful of relocations.
It looks like each one of the blocks in that display is 128KiB, which I think
means those red blocks aren't very far apart, maybe 80MiB apart? Would it
reallocate all of those? That'd be a lot of reallocated sectors.
> Your latency test will show different numbers then, as the head will
> have to seek to the spare sector and back whenever you read through one
> of those spots.
>
> Or the rewrites will fix them all, and you'll have no further problems.
> Hard to tell. Bottom line is that drives can't fix any problems they
> have unless they are *written* in previously identified problem areas.
>
> >> I own some "DM001" drives -- they are unsuited to raid duty as they
> >> don't support ERC. So, out of the box, they are time bombs for any
> >> array you put them in. That's almost certainly why they were ejected
> >> from your array.
> >>
> >> If you absolutely must use them, you *must* set the *driver* timeout to
> >> 120 seconds or more.
> >
> > I've been planning on looking into the ERC stuff. I now actually have some
> > drives that do support ERC, so it'll be interesting to make sure
> > everything is set up properly.
>
> You have it backwards. If you have WD Reds, they are correct out of the
> box. It's when you *don't* have ERC support, or you only have desktop
> ERC, that you need to take special action.
I was under the impression you still had to enable ERC on boot. And I
/thought/ I read that you still want to adjust the timeouts, though not the
same as for consumer drives.
> If you have consumer grade drives in a raid array, and you don't have
> boot scripts or udev rules to deal with timeout mismatch, your *ss is
> hanging in the wind. The links in my last msg should help you out.
There was some talk of ERC/TLER and md. I'll still have to find or write a
script to properly set up timeouts and enable TLER on drives capable of it
(that don't come with it enabled by default).
> Also, I noticed that you used "smartctl -a" to post a complete report of
> your drive's status. It's not complete. You should get in the habit of
> using "smartctl -x" instead, so you see the ERC status, too.
Good to know. Thanks.
> Phil
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-19 14:32 ` Thomas Fjellstrom
@ 2015-05-19 14:51 ` Phil Turmel
2015-05-19 16:07 ` Thomas Fjellstrom
From: Phil Turmel @ 2015-05-19 14:51 UTC (permalink / raw)
To: thomas; +Cc: linux-raid@vger.kernel.org
On 05/19/2015 10:32 AM, Thomas Fjellstrom wrote:
> On Tue 19 May 2015 09:23:20 AM Phil Turmel wrote:
>> Depends. In a properly functioning array that gets scrubbed
>> occasionally, or sufficiently heavy use to read the entire contents
>> occasionally, the UREs get rewritten by MD right away. Any UREs then
>> only show up once.
>
> I have made sure that it's doing regular scrubs, and regular SMART scans. This
> time...
Yes, and this drive was kicked out because it wouldn't be listening
when MD tried to write over the error it found.
I posted this link earlier, but it is particularly relevant:
http://marc.info/?l=linux-raid&m=133665797115876&w=2
>> Interesting. I suspect that if you wipe that disk with noise, read it
>> all back, and wipe it again, you'll have a handful of relocations.
>
> It looks like each one of the blocks in that display is 128KiB. Which i think
> means those red blocks aren't very far apart. Maybe 80MiB apart? Would it
> reallocate all of those? That'd be a lot of reallocated sectors.
Drives will only reallocate a sector where a previous read failed (making it
pending) and a subsequent write plus follow-up verification fails. In general,
writes are unverified at the time of write (or your write performance
would be dramatically slower than reads).
>> You have it backwards. If you have WD Reds, they are correct out of the
>> box. It's when you *don't* have ERC support, or you only have desktop
>> ERC, that you need to take special action.
>
> I was under the impression you still had to enable ERC on boot. And I
> /thought/ I read that you still want to adjust the timeouts, though not the
> same as for consumer drives.
Desktop / consumer drives that support ERC typically ship with it
disabled, so they behave just like drives that don't support it at all.
So a boot script would enable ERC on drives where it can (and where it
isn't already set), and set long driver timeouts on the rest.
Any drive that claims "raid" compatibility will have ERC enabled by
default. Typically 7.0 seconds. WD Reds do. Enterprise drives do, and
have better URE specs, too.
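A rough (untested) sketch of such a boot script, using a grep on smartctl's
output as the "did ERC stick" test; the exact check may need adjusting for
your smartctl version:

#!/bin/sh
# Try to enable 7.0 second ERC; where a drive refuses, lengthen the
# kernel's command timeout instead so md doesn't kick it on a slow sector.
for disk in /sys/block/sd[a-z]; do
    dev=/dev/${disk##*/}
    if smartctl -l scterc,70,70 "$dev" | grep -q seconds; then
        :  # ERC accepted; the default 30s driver timeout is fine
    else
        echo 180 > "$disk/device/timeout"
    fi
done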
>> If you have consumer grade drives in a raid array, and you don't have
>> boot scripts or udev rules to deal with timeout mismatch, your *ss is
>> hanging in the wind. The links in my last msg should help you out.
>
> There was some talk of ERC/TLER and md. I'll still have to find or write a
> script to properly set up timeouts and enable TLER on drives capable of it
> (that don't come with it enabled by default).
Before I got everything onto proper drives, I just put what I needed
into rc.local.
Chris Murphy posted some udev rules that will likely work for you. I
haven't tried them myself, though.
https://www.marc.info/?l=linux-raid&m=142487508806844&w=3
Phil
* Re: Recent drive errors
2015-05-19 14:51 ` Phil Turmel
@ 2015-05-19 16:07 ` Thomas Fjellstrom
2015-05-20 5:38 ` Thomas Fjellstrom
From: Thomas Fjellstrom @ 2015-05-19 16:07 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid@vger.kernel.org
On Tue 19 May 2015 10:51:59 AM you wrote:
> On 05/19/2015 10:32 AM, Thomas Fjellstrom wrote:
> > On Tue 19 May 2015 09:23:20 AM Phil Turmel wrote:
> >> Depends. In a properly functioning array that gets scrubbed
> >> occasionally, or sufficiently heavy use to read the entire contents
> >> occasionally, the UREs get rewritten by MD right away. Any UREs then
> >> only show up once.
> >
> > I have made sure that it's doing regular scrubs, and regular SMART scans.
> > This time...
>
> Yes, and this drive was kicked out. Because it wouldn't be listening
> when MD tried to write over the error it found.
I didn't actually re-install this drive after the last time it was kicked out,
which was when I didn't have regular scrubs or SMART tests set up (actually,
scrubs may have been running, as they were probably the only thing causing any
activity on that array for many months). I noticed the high start/stop count
and the 5 errors, and decided to keep it out of the new array. I seem to recall
one or more drives having suspiciously high start/stop counts that then went
on to fail, but it seems that isn't true; one of them is still in use (64k
start/stop events apparently, or it maxed out the counter).
Basically, I had an unused array of 3TB Seagates. It sat doing virtually
nothing but spinning its platters for quite a long time due to lack of time on
my part, and some time between last summer and winter it kicked out two
drives all on its own. It was probably the monthly scrub. After I got back
from a three-month-long trip, I rebuilt that array (and my main NAS, which
also kicked out two drives... but that's a story for another time) with four of
the old Seagates and one new WD Red. It was this drive that I removed at that
time because it looked suspicious. Sadly, about a month or two later, a second
drive got kicked out and was unambiguously faulty (thousands, if not 10k+,
reallocated sectors), so I replaced it with two new WD Reds and reshaped to a
raid6. After that, I just decided to re-check the first drive to drop out,
just to be safe, and here we are...
I'm running a badblocks -w on the drive as we speak; it'll probably be done in
a day or two. We'll see if it changes anything. It's not exactly writing
noise, but it ought to do the trick.
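(For reference, something along the lines of "badblocks -wsv -b 4096 /dev/sdf",
i.e. the destructive write-mode test with progress output and 4KiB blocks; it
overwrites the whole drive with test patterns and reads each one back.)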
> I posted this link earlier, but it is particularly relevant:
> http://marc.info/?l=linux-raid&m=133665797115876&w=2
>
> >> Interesting. I suspect that if you wipe that disk with noise, read it
> >> all back, and wipe it again, you'll have a handful of relocations.
> >
> > It looks like each one of the blocks in that display is 128KiB. Which i
> > think means those red blocks aren't very far apart. Maybe 80MiB apart?
> > Would it reallocate all of those? That'd be a lot of reallocated sectors.
>
> Drives will only reallocate where a previous read failed (making it
> pending), then write and follow-up verification fails. In general,
> writes are unverified at the time of write (or your write performance
> would be dramatically slower than read).
Right. I was just thinking about how you mentioned that I'd get a handful of
reallocations based on the latency shown in the image I posted. It's a lot of
sectors that seem to be affected by the latency spikes, so I assumed (probably
wrongly) that many of them may be reallocated afterwards.
If this drive ends up not reallocating a single sector, or only a few, I may
just keep it around as a hot spare, though I feel that's not the best idea:
if it is degrading, then by the time the array actually goes to use that disk,
it has a higher chance of failing.
> >> You have it backwards. If you have WD Reds, they are correct out of the
> >> box. It's when you *don't* have ERC support, or you only have desktop
> >> ERC, that you need to take special action.
> >
> > I was under the impression you still had to enable ERC on boot. And I
> > /thought/ I read that you still want to adjust the timeouts, though not
> > the
> > same as for consumer drives.
>
> Desktop / consumer drives that support ERC typically ship with it
> disabled, so they behave just like drives that don't support it at all.
> So a boot script would enable ERC on drives where it can (and not
> already OK), and set long driver timeouts on the rest.
>
> Any drive that claims "raid" compatibility will have ERC enabled by
> default. Typically 7.0 seconds. WD Reds do. Enterprise drives do, and
> have better URE specs, too.
Good to know.
> >> If you have consumer grade drives in a raid array, and you don't have
> >> boot scripts or udev rules to deal with timeout mismatch, your *ss is
> >> hanging in the wind. The links in my last msg should help you out.
> >
> > There was some talk of ERC/TLER and md. I'll still have to find or write a
> > script to properly set up timeouts and enable TLER on drives capable of it
> > (that don't come with it enabled by default).
>
> Before I got everything onto proper drives, I just put what I needed
> into rc.local.
It's going to be a long time before I can swap out the rest of the Seagates. I
just can't justify the cost at the moment, especially as it's the backup for my
main NAS, which used to be all 2TB Seagates but has since been retrofitted with
two WD Reds after two drives developed thousands of reallocated sectors. The
funny thing is that one of the Seagates had already been replaced prior to
that, so 3 out of 5 of the original setup failed. And before that, at least two
1TB Seagates failed on me (out of 7ish), I think one 640G one went, and a
couple of 320s went. I won't blame the two 80s that failed on Seagate, though;
that was a power supply fault. It took out two drives out of five (total), plus
memory, and made the motherboard a bit flaky.
I'm just a little bit jaded when it comes to Seagates these days, but I still
can't just up and swap them all out, even if it's a good idea. If I had the
money, I wouldn't mind replacing them all with enterprise/nearline or NAS
drives, and slapping the Seagates in a big ZFS pool or something for some
scratch space, or just selling them...
> Chris Murphy posted some udev rules that will likely work for you. I
> haven't tried them myself, though.
>
> https://www.marc.info/?l=linux-raid&m=142487508806844&w=3
Thanks :)
> Phil
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-19 16:07 ` Thomas Fjellstrom
@ 2015-05-20 5:38 ` Thomas Fjellstrom
From: Thomas Fjellstrom @ 2015-05-20 5:38 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid@vger.kernel.org
On Tue 19 May 2015 10:07:49 AM Thomas Fjellstrom wrote:
> On Tue 19 May 2015 10:51:59 AM you wrote:
> > On 05/19/2015 10:32 AM, Thomas Fjellstrom wrote:
> > > On Tue 19 May 2015 09:23:20 AM Phil Turmel wrote:
> > >> Depends. In a properly functioning array that gets scrubbed
> > >> occasionally, or sufficiently heavy use to read the entire contents
> > >> occasionally, the UREs get rewritten by MD right away. Any UREs then
> > >> only show up once.
> > >
> > > I have made sure that it's doing regular scrubs, and regular SMART
> > > scans.
> > > This time...
> >
> > Yes, and this drive was kicked out. Because it wouldn't be listening
> > when MD tried to write over the error it found.
>
[snip]
>
> > I posted this link earlier, but it is particularly relevant:
> > http://marc.info/?l=linux-raid&m=133665797115876&w=2
> >
> > >> Interesting. I suspect that if you wipe that disk with noise, read it
> > >> all back, and wipe it again, you'll have a handful of relocations.
> > >
> > > It looks like each one of the blocks in that display is 128KiB. Which i
> > > think means those red blocks aren't very far apart. Maybe 80MiB apart?
> > > Would it reallocate all of those? That'd be a lot of reallocated
> > > sectors.
> >
> > Drives will only reallocate where a previous read failed (making it
> > pending), then write and follow-up verification fails. In general,
> > writes are unverified at the time of write (or your write performance
> > would be dramatically slower than read).
>
> Right. I was just thinking about how you mentioned that I'd get a handful of
> reallocations based on the latency shown in the image I posted. It's a lot
> of sectors that seem to be affected by the latency spikes, so I assumed
> (probably wrongly) that many of them may be reallocated afterwards.
>
> If this drive ends up not reallocating a single sector, or only a few, I may
> just keep it around as a hot spare, though i feel that's not the best idea,
> if it is degrading, then when it actually goes to use that disk it has a
> higher chance of failing.
Well here's something:
[78447.747221] sd 0:0:15:0: [sdf] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[78447.749092] sd 0:0:15:0: [sdf] Sense Key : Medium Error [current]
[78447.751034] sd 0:0:15:0: [sdf] Add. Sense: Unrecovered read error
[78447.752925] sd 0:0:15:0: [sdf] CDB: Read(16) 88 00 00 00 00 00 ef 7a 0f b0 00 00 00 08 00 00
[78447.754746] blk_update_request: critical medium error, dev sdf, sector 4017754032
[78447.756700] Buffer I/O error on dev sdf, logical block 502219254, async page read
<many many more of the above>
5 Reallocated_Sector_Ct PO--CK 087 087 036 - 17232
187 Reported_Uncorrect -O--CK 001 001 000 - 8236
197 Current_Pending_Sector -O--C- 024 024 000 - 12584
198 Offline_Uncorrectable ----C- 024 024 000 - 12584
Badblocks is showing a bunch of errors now, and the above is what's in dmesg and smartctl.
So I guess it was dead after all.
> > >> You have it backwards. If you have WD Reds, they are correct out of
> > >> the
> > >> box. It's when you *don't* have ERC support, or you only have desktop
> > >> ERC, that you need to take special action.
> > >
> > > I was under the impression you still had to enable ERC on boot. And I
> > > /thought/ I read that you still want to adjust the timeouts, though not
> > > the
> > > same as for consumer drives.
> >
> > Desktop / consumer drives that support ERC typically ship with it
> > disabled, so they behave just like drives that don't support it at all.
> >
> > So a boot script would enable ERC on drives where it can (and not
> >
> > already OK), and set long driver timeouts on the rest.
> >
> > Any drive that claims "raid" compatibility will have ERC enabled by
> > default. Typically 7.0 seconds. WD Reds do. Enterprise drives do, and
> > have better URE specs, too.
>
> Good to know.
>
> > >> If you have consumer grade drives in a raid array, and you don't have
> > >> boot scripts or udev rules to deal with timeout mismatch, your *ss is
> > >> hanging in the wind. The links in my last msg should help you out.
> > >
> > > There was some talk of ERC/TLER and md. I'll still have to find or write
> > > a
> > > script to properly set up timeouts and enable TLER on drives capable of
> > > it
> > > (that don't come with it enabled by default).
> >
> > Before I got everything onto proper drives, I just put what I needed
> > into rc.local.
>
[snip]
>
> > Chris Murphy posted some udev rules that will likely work for you. I
> > haven't tried them myself, though.
> >
> > https://www.marc.info/?l=linux-raid&m=142487508806844&w=3
>
> Thanks :)
>
> > Phil
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-19 12:50 ` Thomas Fjellstrom
2015-05-19 13:23 ` Phil Turmel
@ 2015-05-21 7:58 ` Mikael Abrahamsson
2015-05-21 12:45 ` Thomas Fjellstrom
2015-05-22 7:07 ` Weedy
From: Mikael Abrahamsson @ 2015-05-21 7:58 UTC (permalink / raw)
To: Thomas Fjellstrom; +Cc: Phil Turmel, linux-raid@vger.kernel.org
On Tue, 19 May 2015, Thomas Fjellstrom wrote:
> How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
> thousands?
I will replace any drive that has developed UNC sectors a few times, so
I'd say "less than 10".
+1 on the "set kernel timeout to more than 120 seconds". I have this in
/etc/rc.local:
for x in /sys/block/sd[a-z] ; do
echo 180 > $x/device/timeout
done
echo 4096 > /sys/block/md0/md/stripe_cache_size
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: Recent drive errors
2015-05-21 7:58 ` Mikael Abrahamsson
@ 2015-05-21 12:45 ` Thomas Fjellstrom
2015-05-22 13:38 ` Mikael Abrahamsson
2015-05-22 7:07 ` Weedy
From: Thomas Fjellstrom @ 2015-05-21 12:45 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Phil Turmel, linux-raid@vger.kernel.org
On Thu 21 May 2015 09:58:48 AM Mikael Abrahamsson wrote:
> On Tue, 19 May 2015, Thomas Fjellstrom wrote:
> > How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
> > thousands?
>
> I will replace any drive that have developed UNC sectors a few times, so
> I'd say "less than 10".
In this case, it looked like 5 UNC errors for a single sector, and some weird
latency patterns, until I ran badblocks -w on it; then it gave me > 10k
reallocated sectors and many thousands more uncorrectable sectors. Before the
badblocks test it "looked" OK; now it's most definitely dead.
> +1 on the "set kernel timeout to more than 120 seconds". I have this in
> /etc/rc.local:
>
> for x in /sys/block/sd[a-z] ; do
> echo 180 > $x/device/timeout
> done
>
> echo 4096 > /sys/block/md0/md/stripe_cache_size
I presume it's ok to do that even if the drives do ERC/TLER? Just woke up, but
my brain seems to be telling me it shouldn't break anything since the ERC
drives should always return after 7s no matter what...
--
Thomas Fjellstrom
thomas@fjellstrom.ca
* Re: Recent drive errors
2015-05-21 7:58 ` Mikael Abrahamsson
2015-05-21 12:45 ` Thomas Fjellstrom
@ 2015-05-22 7:07 ` Weedy
From: Weedy @ 2015-05-22 7:07 UTC (permalink / raw)
To: Mikael Abrahamsson
Cc: Thomas Fjellstrom, Phil Turmel, linux-raid@vger.kernel.org
On Thu, May 21, 2015 at 3:58 AM, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 19 May 2015, Thomas Fjellstrom wrote:
>
>> How many UREs are considered "ok"? Tens, hundreds, thousands, tens of
>> thousands?
>
>
> I will replace any drive that have developed UNC sectors a few times, so I'd
> say "less than 10".
>
> +1 on the "set kernel timeout to more than 120 seconds". I have this in
> /etc/rc.local:
>
> for x in /sys/block/sd[a-z] ; do
> echo 180 > $x/device/timeout
> done
>
> echo 4096 > /sys/block/md0/md/stripe_cache_size
#!/bin/sh

# md resync speed limits: floor 4 MiB/s, ceiling 24 MiB/s
if [ -e /proc/sys/dev/raid/speed_limit_min ]; then
    echo $((1024*4)) > /proc/sys/dev/raid/speed_limit_min 2> /dev/null
    echo $((1024*24)) > /proc/sys/dev/raid/speed_limit_max 2> /dev/null
fi

if [ -e /dev/md0 -o -d /dev/md ]; then
    # Larger stripe cache and read-ahead for each md array
    for md in /dev/md*; do
        echo $((1024*4)) > /sys/block/${md#/dev/}/md/stripe_cache_size
        blockdev --setra $((1024*4)) /dev/${md#/dev/}
    done
    # 45s driver timeout plus 7.0s ERC on every member disk
    for disk in /sys/block/sd*; do
        echo 45 > $disk/device/timeout
        smartctl -l scterc,70,70 /dev/${disk#/sys/block/}
    done
fi

# I/O scheduler and queue tuning for all disks
for disk in /sys/block/sd*; do
    echo cfq > $disk/queue/scheduler
    echo 768 > $disk/queue/read_ahead_kb
    echo 256 > $disk/queue/nr_requests
    echo 8 2>/dev/null > $disk/device/queue_depth
done
* Re: Recent drive errors
2015-05-21 12:45 ` Thomas Fjellstrom
@ 2015-05-22 13:38 ` Mikael Abrahamsson
2015-05-22 14:19 ` Thomas Fjellstrom
From: Mikael Abrahamsson @ 2015-05-22 13:38 UTC (permalink / raw)
To: Thomas Fjellstrom; +Cc: Phil Turmel, linux-raid@vger.kernel.org
On Thu, 21 May 2015, Thomas Fjellstrom wrote:
>> for x in /sys/block/sd[a-z] ; do
>> echo 180 > $x/device/timeout
>> done
>
> I presume it's ok to do that even if the drives do ERC/TLER? Just woke up, but
> my brain seems to be telling me it shouldn't break anything since the ERC
> drives should always return after 7s no matter what...
Correct, the only downside is that if the drive really dies, it's going to
take longer to detect this.
I'd rather have longer timeouts to make sure drives are never kicked out
merely because of a lack of ERC, a controller reset, or some other transient
problem.
I really, really want to avoid drives being kicked for any reason other than
them actually being dead. I'd rather have reads stall for a few minutes than
have that happen. For other use cases, the requirements may be different.
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: Recent drive errors
2015-05-22 13:38 ` Mikael Abrahamsson
@ 2015-05-22 14:19 ` Thomas Fjellstrom
From: Thomas Fjellstrom @ 2015-05-22 14:19 UTC (permalink / raw)
To: Mikael Abrahamsson; +Cc: Phil Turmel, linux-raid@vger.kernel.org
On Fri 22 May 2015 03:38:06 PM Mikael Abrahamsson wrote:
> On Thu, 21 May 2015, Thomas Fjellstrom wrote:
> >> for x in /sys/block/sd[a-z] ; do
> >>
> >> echo 180 > $x/device/timeout
> >>
> >> done
> >
> > I presume it's ok to do that even if the drives do ERC/TLER? Just woke up,
> > but my brain seems to be telling me it shouldn't break anything since the
> > ERC drives should always return after 7s no matter what...
>
> Correct, the only downside is that if the drive really dies, it's going to
> take longer to detect this.
>
> I'd rather have longer timeouts to make sure drives never kicked out
> because of lack of ERC or something else, than to have drives kicked for
> some reason (controller reset, lack of ERC, or something else).
>
> I really really want to avoid drives being kicked for any other reason
> than them really being dead. I'd rather have reads stalled for a few
> minutes than this happening. For other use-cases, the requirements may be
> different.
Yeah, I agree. Especially given this is just my home NAS and NAS backup setup,
it's better to just stall rather than die horribly.
--
Thomas Fjellstrom
thomas@fjellstrom.ca