* Wiki-recovering failed raid, overlay problem
@ 2013-06-01 6:23 Chris Finley
2013-06-01 23:30 ` Phil Turmel
0 siblings, 1 reply; 9+ messages in thread
From: Chris Finley @ 2013-06-01 6:23 UTC (permalink / raw)
To: linux-raid
I am trying to recover a failed Raid 5 array by following the guide at
https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
Things go fine until I get to the command under "Setup the loop-device
and the overlay device:"
parallel 'size=$(blockdev --getsize {}); loop=$(losetup -f --show -- overlay-{/}); echo 0 $size snapshot {} $loop P 8 | dmsetup create {/}' ::: $DEVICES
This command gets me:
device-mapper: reload ioctl failed: Device or resource busy
Command failed
device-mapper: reload ioctl failed: Device or resource busy
Command failed
device-mapper: reload ioctl failed: Device or resource busy
Command failed
device-mapper: reload ioctl failed: Device or resource busy
Command failed
The drives are not mounted. I am booting to a system on sda. I tried
this in single-user mode with the same result. I tried searching for
dmsetup help without luck.
Any advice on the cause of this error would be greatly appreciated.
The overlays are created in my current directory at 2.1TB each:
-rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdb1
-rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdc1
-rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdd1
-rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sde1
The loop devices appear to be created:
root@mythserver:~# losetup -a
/dev/loop0: [0807]:58851784 (/root/overlay-sdb1)
/dev/loop1: [0807]:58851786 (/root/overlay-sdc1)
/dev/loop2: [0807]:58851787 (/root/overlay-sdd1)
/dev/loop3: [0807]:58851792 (/root/overlay-sde1)
These are the entries that are piped into 'dmsetup create {/}':
0 3907024002 snapshot /dev/sdb1 /dev/loop0 P 8
0 3907024002 snapshot /dev/sdc1 /dev/loop1 P 8
0 3907024002 snapshot /dev/sdd1 /dev/loop2 P 8
0 3907024002 snapshot /dev/sde1 /dev/loop3 P 8
Nothing has been created in /dev/mapper/
root@mythserver:~# l /dev/mapper/
total 0
crw------- 1 root root 10, 236 May 30 23:55 control
Again, thanks!
Chris
* Re: Wiki-recovering failed raid, overlay problem
2013-06-01 6:23 Wiki-recovering failed raid, overlay problem Chris Finley
@ 2013-06-01 23:30 ` Phil Turmel
2013-06-02 0:40 ` Chris Finley
0 siblings, 1 reply; 9+ messages in thread
From: Phil Turmel @ 2013-06-01 23:30 UTC (permalink / raw)
To: Chris Finley; +Cc: linux-raid
Hi Chris,
On 06/01/2013 02:23 AM, Chris Finley wrote:
> I am trying to recover a failed Raid 5 array by following the guide at
> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
Stop. Report the *critical* details of your setup. At least:
1) "mdadm -E /dev/sdXX" for every member device of the raid.
2) "dmesg" or suitable portions of syslog, showing the last attempted
assembly, the first failed assembly, the failure events that started
your saga, and the last pre-failure assembly.
3) an account of all "mdadm" commands you've already used and their results.
4) an account of any other operations you've performed that might have
written to the member disks.
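For item #1, a loop along these lines captures every superblock in one
file (a sketch; substitute your actual member partitions):

for d in /dev/sd[b-e]1; do echo "== $d =="; mdadm -E $d; done > raid.status.txt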
> Things go fine until I get to the command under "Setup the loop-device
> and the overlay device:"
>
> parallel 'size=$(blockdev --getsize {}); loop=$(losetup -f --show -- overlay-{/}); echo 0 $size snapshot {} $loop P 8 | dmsetup create {/}' ::: $DEVICES
>
> This command gets me:
> device-mapper: reload ioctl failed: Device or resource busy
> Command failed
> device-mapper: reload ioctl failed: Device or resource busy
> Command failed
> device-mapper: reload ioctl failed: Device or resource busy
> Command failed
> device-mapper: reload ioctl failed: Device or resource busy
> Command failed
Your array is probably still partially assembled. The wiki is lame.
This mailing list is the right place to get help. (I'm rather biased
against wikis for this sort of thing, but that's off-topic.)
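If so, check for and clear the partial assembly before retrying the
overlay step (a sketch, assuming the array is /dev/md0):

cat /proc/mdstat
mdadm --stop /dev/md0

dmsetup can't claim a member device that md already holds open, which
is exactly the "Device or resource busy" you're seeing.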
> The drives are not mounted. I am booting to a system on sda. I tried
> this in single-user mode with the same result. I tried searching for
> dmsetup help without luck.
>
> Any advice on the cause of this error would be greatly appreciated.
>
> The overlays are created in my current directory at 2.1TB each:
> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdb1
> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdc1
> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdd1
> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sde1
>
> The loop devices appear to be created:
> root@mythserver:~# losetup -a
> /dev/loop0: [0807]:58851784 (/root/overlay-sdb1)
> /dev/loop1: [0807]:58851786 (/root/overlay-sdc1)
> /dev/loop2: [0807]:58851787 (/root/overlay-sdd1)
> /dev/loop3: [0807]:58851792 (/root/overlay-sde1)
>
> These are the entries that are piped into 'dmsetup create {/}':
> 0 3907024002 snapshot /dev/sdb1 /dev/loop0 P 8
> 0 3907024002 snapshot /dev/sdc1 /dev/loop1 P 8
> 0 3907024002 snapshot /dev/sdd1 /dev/loop2 P 8
> 0 3907024002 snapshot /dev/sde1 /dev/loop3 P 8
>
> Nothing has been created in /dev/mapper/
> root@mythserver:~# l /dev/mapper/
> total 0
> crw------- 1 root root 10, 236 May 30 23:55 control
These overlay exercises are rarely needed, and yours don't appear to
have been created as intended.
Please just round up the data requested and report back. (Paste text
inline, or use plain text attachments, please.)
We may want more data later (like smartctl reports), but items #1-#4 are
needed now.
Phil
* Re: Wiki-recovering failed raid, overlay problem
2013-06-01 23:30 ` Phil Turmel
@ 2013-06-02 0:40 ` Chris Finley
2013-06-02 1:32 ` Phil Turmel
0 siblings, 1 reply; 9+ messages in thread
From: Chris Finley @ 2013-06-02 0:40 UTC (permalink / raw)
Cc: linux-raid
[-- Attachment #1: Type: text/plain, Size: 5771 bytes --]
On Sat, Jun 1, 2013 at 4:30 PM, Phil Turmel <philip@turmel.org> wrote:
> Hi Chris,
>
> On 06/01/2013 02:23 AM, Chris Finley wrote:
>> I am trying to recover a failed Raid 5 array by following the guide at
>> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
>
> Stop. Report the *critical* details of your setup. At least:
Thank you for the reply.
Oh, yes. I'm the guy from an earlier post:
http://marc.info/?l=linux-raid&m=136840333618808&w=2
The pastebins include mdadm -E and smartmontools output. I included
them as attachments at your request.
Two drives dropped out of the RAID 5 for what appear to be read
errors. One partition (sde1) had a much lower event count. I was going
to try forced assembly with the data from the other three drives (sdc,
sdd and sdf). I have not tried the forced assembly yet.
sdd had quite a few read errors, so I used ddrescue to copy the data
from sdd to sdb. Thus, my new failed set would be sdb, sdc, sde. The
low event count fourth drive is now sdd.
>
> 1) "mdadm -E /dev/sdXX" for every member device of the raid.
attached: raid.status.txt
This is before attempting anything on the wiki.
> 2) "dmesg" or suitable portions of syslog, showing the last attempted
> assembly, the first failed assembly, the failure events that started
> your saga, and the last pre-failure assembly.
Not much from the last boot, but I do see ioctl in there.
[ 0.218206] pnp 00:02: [dma 4]
[ 0.570352] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel@redhat.com
[ 2.712503] device-mapper: dm-raid45: initialized v0.2594b
[ 2.715231] md: linear personality registered for level -1
[ 2.717072] md: multipath personality registered for level -4
[ 2.722254] md: raid0 personality registered for level 0
[ 2.727948] md: raid1 personality registered for level 1
[ 2.861915] md: bind<sdb1>
[ 11.676186] md: raid6 personality registered for level 6
[ 11.676188] md: raid5 personality registered for level 5
[ 11.676189] md: raid4 personality registered for level 4
[ 11.684692] md: raid10 personality registered for level 10
[ 11.842658] md: bind<sde1>
[ 11.844580] md: bind<sdc1>
[ 11.891707] md: bind<sdd1>
> 3) an account of all "mdadm" commands you've already used and their results.
only:
mdadm -E
mdadm --stop /dev/md0
> 4) an account of any other operations you've performed that might have
> written to the member disks.
badblocks -v on the last drive to drop out of the raid (sdd)
Then ddrescue to move the data from sdd to sdb.
>
>> Things go fine until I get to the command under "Setup the loop-device
>> and the overlay device:"
>>
>> parallel 'size=$(blockdev --getsize {}); loop=$(losetup -f --show -- overlay-{/}); echo 0 $size snapshot {} $loop P 8 | dmsetup create {/}' ::: $DEVICES
>>
>> This command gets me:
>> device-mapper: reload ioctl failed: Device or resource busy
>> Command failed
>> device-mapper: reload ioctl failed: Device or resource busy
>> Command failed
>> device-mapper: reload ioctl failed: Device or resource busy
>> Command failed
>> device-mapper: reload ioctl failed: Device or resource busy
>> Command failed
>
> Your array is probably still partially assembled. The wiki is lame.
> This mailing list is the right place to get help. (I'm rather biased
> against wikis for this sort of thing, but that's off-topic.)
>
You are probably right. After rebooting for several of the steps
(including drive addition/removal), it appears the OS tried to
reassemble the array. It does tell you that there is a failed array
and asks if you'd like to attempt to start the array anyway. It
appears that answering "No" still gets a partially assembled array. I
have stopped the array, but I'll wait for advice before attempting the
overlay again.
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdd1[2](S) sdc1[0](S) sde1[3](S) sdb1[1](S)
7814047744 blocks
>> The drives are not mounted. I am booting to a system on sda. I tried
>> this in single-user mode with the same result. I tried searching for
>> dmsetup help without luck.
>>
>> Any advice on the cause of this error would be greatly appreciated.
>>
>> The overlays are created in my current directory at 2.1TB each:
>> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdb1
>> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdc1
>> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sdd1
>> -rw-r--r-- 1 root root 2.1T May 30 21:23 overlay-sde1
>>
>> The loop devices appear to be created:
>> root@mythserver:~# losetup -a
>> /dev/loop0: [0807]:58851784 (/root/overlay-sdb1)
>> /dev/loop1: [0807]:58851786 (/root/overlay-sdc1)
>> /dev/loop2: [0807]:58851787 (/root/overlay-sdd1)
>> /dev/loop3: [0807]:58851792 (/root/overlay-sde1)
>>
>> These are the entries that are piped into 'dmsetup create {/}':
>> 0 3907024002 snapshot /dev/sdb1 /dev/loop0 P 8
>> 0 3907024002 snapshot /dev/sdc1 /dev/loop1 P 8
>> 0 3907024002 snapshot /dev/sdd1 /dev/loop2 P 8
>> 0 3907024002 snapshot /dev/sde1 /dev/loop3 P 8
>>
>> Nothing has been created in /dev/mapper/
>> root@mythserver:~# l /dev/mapper/
>> total 0
>> crw------- 1 root root 10, 236 May 30 23:55 control
>
> These overlay exercises are rarely needed, and yours don't appear to
> have been created as intended.
>
> Please just round up the data requested and report back. (Paste text
> inline, or use plain text attachments, please.)
>
> We may want more data later (like smartctl reports), but items #1-#4 are
> needed now.
Because each of the drives had some read errors, I thought it would be
safer to make the first attempt with overlays. There is always the
possibility of me entering a command incorrectly too :)
>
> Phil
>
[-- Attachment #2: raid.status.txt --]
[-- Type: text/plain, Size: 4161 bytes --]
/dev/sdc1:
Magic : a92b4efc
Version : 0.90.00
UUID : 44ecd957:d23c44b1:b6641343:7cc40f45 (local to host oracle)
Creation Time : Mon Mar 28 00:37:02 2011
Raid Level : raid5
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sat May 11 01:17:20 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Checksum : 71052be - correct
Events : 42810
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
this 0 8 33 0 active sync /dev/sdc1
0 0 8 33 0 active sync /dev/sdc1
1 1 0 0 1 faulty removed
2 2 0 0 2 faulty removed
3 3 8 81 3 active sync /dev/sdf1
/dev/sdd1:
Magic : a92b4efc
Version : 0.90.00
UUID : 44ecd957:d23c44b1:b6641343:7cc40f45 (local to host oracle)
Creation Time : Mon Mar 28 00:37:02 2011
Raid Level : raid5
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Thu May 9 11:52:10 2013
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Checksum : 70e445d - correct
Events : 42785
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
this 1 8 49 1 active sync /dev/sdd1
0 0 8 33 0 active sync /dev/sdc1
1 1 8 49 1 active sync /dev/sdd1
2 2 0 0 2 faulty removed
3 3 8 81 3 active sync /dev/sdf1
/dev/sde1:
Magic : a92b4efc
Version : 0.90.00
UUID : 44ecd957:d23c44b1:b6641343:7cc40f45 (local to host oracle)
Creation Time : Mon Mar 28 00:37:02 2011
Raid Level : raid5
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Sat May 4 21:52:22 2013
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 7080301 - correct
Events : 35760
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
this 2 8 65 2 active sync /dev/sde1
0 0 8 33 0 active sync /dev/sdc1
1 1 8 49 1 active sync /dev/sdd1
2 2 8 65 2 active sync /dev/sde1
3 3 8 81 3 active sync /dev/sdf1
/dev/sdf1:
Magic : a92b4efc
Version : 0.90.00
UUID : 44ecd957:d23c44b1:b6641343:7cc40f45 (local to host oracle)
Creation Time : Mon Mar 28 00:37:02 2011
Raid Level : raid5
Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
Raid Devices : 4
Total Devices : 3
Preferred Minor : 0
Update Time : Sat May 11 01:17:20 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Checksum : 71052f4 - correct
Events : 42810
Layout : left-symmetric
Chunk Size : 256K
Number Major Minor RaidDevice State
this 3 8 81 3 active sync /dev/sdf1
0 0 8 33 0 active sync /dev/sdc1
1 1 0 0 1 faulty removed
2 2 0 0 2 faulty removed
3 3 8 81 3 active sync /dev/sdf1
[-- Attachment #3: smart_all.txt --]
[-- Type: text/plain, Size: 34016 bytes --]
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-41-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: SAMSUNG SpinPoint F4 EG (AFT)
Device Model: SAMSUNG HD204UI
Serial Number: S2H7JD2B105685
LU WWN Device Id: 5 0024e9 00492cb29
Firmware Version: 1AQ10001
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 6
Local Time is: Sat May 11 22:25:50 2013 PDT
==> WARNING: Using smartmontools or hdparm with this
drive may result in data loss due to a firmware bug.
****** THIS DRIVE MAY OR MAY NOT BE AFFECTED! ******
Buggy and fixed firmware report same version number!
See the following web pages for details:
http://www.samsung.com/global/business/hdd/faqView.do?b2b_bbs_msg_id=386
http://sourceforge.net/apps/trac/smartmontools/wiki/SamsungF4EGBadBlocks
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (19920) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 4829
2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
3 Spin_Up_Time 0x0023 068 068 025 Pre-fail Always - 9707
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 55
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 18611
10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
181 Program_Fail_Cnt_Total 0x0022 099 099 000 Old_age Always - 29874886
191 G-Sense_Error_Rate 0x0022 252 252 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
194 Temperature_Celsius 0x0002 064 062 000 Old_age Always - 36 (Min/Max 18/39)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 8
223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0
225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Completed [00% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-41-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: SAMSUNG SpinPoint F4 EG (AFT)
Device Model: SAMSUNG HD204UI
Serial Number: S2H7JD2B105688
LU WWN Device Id: 5 0024e9 00492cb41
Firmware Version: 1AQ10001
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 6
Local Time is: Sat May 11 22:40:51 2013 PDT
==> WARNING: Using smartmontools or hdparm with this
drive may result in data loss due to a firmware bug.
****** THIS DRIVE MAY OR MAY NOT BE AFFECTED! ******
Buggy and fixed firmware report same version number!
See the following web pages for details:
http://www.samsung.com/global/business/hdd/faqView.do?b2b_bbs_msg_id=386
http://sourceforge.net/apps/trac/smartmontools/wiki/SamsungF4EGBadBlocks
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 17) The self-test routine was aborted by
the host.
Total time to complete Offline
data collection: (19980) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 23068
2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
3 Spin_Up_Time 0x0023 068 061 025 Pre-fail Always - 9828
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 55
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 18610
10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
181 Program_Fail_Cnt_Total 0x0022 099 099 000 Old_age Always - 29540210
191 G-Sense_Error_Rate 0x0022 100 100 000 Old_age Always - 7
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
194 Temperature_Celsius 0x0002 063 058 000 Old_age Always - 37 (Min/Max 18/42)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 099 099 000 Old_age Always - 178
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 8
223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0
225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
SMART Error Log Version: 1
ATA Error Count: 424 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 424 occurred at disk power-on lifetime: 18593 hours (774 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 04 b3 f2 56 e8 Error: UNC 4 sectors at LBA = 0x0856f2b3 = 139915955
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 04 b3 f2 56 e8 00 00:00:13.016 READ DMA
27 00 00 00 00 00 e0 00 00:00:13.016 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:00:13.016 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 00 00:00:13.016 IDENTIFY DEVICE
ef 03 44 00 00 00 a0 00 00:00:13.016 SET FEATURES [Set transfer mode]
Error 423 occurred at disk power-on lifetime: 18593 hours (774 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 04 b3 f2 56 e8 Error: UNC 4 sectors at LBA = 0x0856f2b3 = 139915955
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 04 b3 f2 56 e8 00 00:00:13.011 READ DMA
27 00 00 00 00 00 e0 00 00:00:13.011 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:00:13.011 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 00 00:00:13.011 IDENTIFY DEVICE
ef 03 44 00 00 00 a0 00 00:00:13.011 SET FEATURES [Set transfer mode]
Error 422 occurred at disk power-on lifetime: 18593 hours (774 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 04 b3 f2 56 e8 Error: UNC 4 sectors at LBA = 0x0856f2b3 = 139915955
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 04 b3 f2 56 e8 00 00:00:13.006 READ DMA
27 00 00 00 00 00 e0 00 00:00:13.006 READ NATIVE MAX ADDRESS EXT
27 00 00 00 00 00 e0 00 00:00:13.006 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:00:13.006 IDENTIFY DEVICE
ef 03 44 00 00 00 a0 00 00:00:13.006 SET FEATURES [Set transfer mode]
Error 421 occurred at disk power-on lifetime: 18593 hours (774 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 04 b3 f2 56 e8 Error: UNC 4 sectors at LBA = 0x0856f2b3 = 139915955
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 04 b3 f2 56 e8 00 00:00:13.001 READ DMA
27 00 00 00 00 00 e0 00 00:00:13.001 READ NATIVE MAX ADDRESS EXT
27 00 00 00 00 00 e0 00 00:00:13.001 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:00:13.001 IDENTIFY DEVICE
ef 03 44 00 00 00 a0 00 00:00:13.001 SET FEATURES [Set transfer mode]
Error 420 occurred at disk power-on lifetime: 18593 hours (774 days + 17 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 02 b1 f2 56 e8 Error: UNC 2 sectors at LBA = 0x0856f2b1 = 139915953
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 02 b1 f2 56 e8 00 00:00:12.996 READ DMA
27 00 00 00 00 00 e0 00 00:00:12.996 READ NATIVE MAX ADDRESS EXT
27 00 00 00 00 00 e0 00 00:00:12.996 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:00:12.996 IDENTIFY DEVICE
ef 03 44 00 00 00 a0 00 00:00:12.996 SET FEATURES [Set transfer mode]
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Aborted by host 10% 18590 -
# 2 Extended offline Aborted by host 10% 18589 -
Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Aborted_by_host [10% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-41-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: SAMSUNG SpinPoint F4 EG (AFT)
Device Model: SAMSUNG HD204UI
Serial Number: S2H7JD2B105686
LU WWN Device Id: 5 0024e9 00492cb2e
Firmware Version: 1AQ10001
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 6
Local Time is: Sat May 11 22:40:56 2013 PDT
==> WARNING: Using smartmontools or hdparm with this
drive may result in data loss due to a firmware bug.
****** THIS DRIVE MAY OR MAY NOT BE AFFECTED! ******
Buggy and fixed firmware report same version number!
See the following web pages for details:
http://www.samsung.com/global/business/hdd/faqView.do?b2b_bbs_msg_id=386
http://sourceforge.net/apps/trac/smartmontools/wiki/SamsungF4EGBadBlocks
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (21060) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 3
2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
3 Spin_Up_Time 0x0023 067 067 025 Pre-fail Always - 10035
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 56
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 18610
10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
181 Program_Fail_Cnt_Total 0x0022 095 095 000 Old_age Always - 127428242
191 G-Sense_Error_Rate 0x0022 252 252 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
194 Temperature_Celsius 0x0002 060 057 000 Old_age Always - 40 (Min/Max 17/43)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0036 099 099 000 Old_age Always - 552
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 0
223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0
225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
SMART Error Log Version: 1
ATA Error Count: 6 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 6 occurred at disk power-on lifetime: 17314 hours (721 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ec 00 00 00 00 00 a0 00 00:15:10.042 IDENTIFY DEVICE
ef 03 45 00 00 00 a0 00 00:15:10.042 SET FEATURES [Set transfer mode]
27 00 00 00 00 00 e0 00 00:15:10.042 READ NATIVE MAX ADDRESS EXT
ec 00 00 00 00 00 a0 00 00:15:10.042 IDENTIFY DEVICE
00 00 01 01 00 00 00 00 00:15:10.042 NOP [Abort queued commands]
Error 5 occurred at disk power-on lifetime: 17314 hours (721 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ec 00 00 00 00 00 a0 00 00:15:10.035 IDENTIFY DEVICE
00 00 01 01 00 00 40 00 00:15:10.035 NOP [Abort queued commands]
00 00 01 01 00 00 40 00 00:15:10.033 NOP [Abort queued commands]
2f 00 01 10 00 00 a0 08 00:15:10.033 READ LOG EXT
60 00 48 4f 9c c9 40 08 00:15:09.953 READ FPDMA QUEUED
Error 4 occurred at disk power-on lifetime: 17314 hours (721 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ec 00 00 00 00 00 a0 00 00:15:10.025 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 00 00:15:10.025 IDENTIFY DEVICE
00 00 01 01 00 00 40 00 00:15:10.025 NOP [Abort queued commands]
00 00 01 01 00 00 40 00 00:15:10.023 NOP [Abort queued commands]
2f 00 01 10 00 00 a0 08 00:15:10.023 READ LOG EXT
Error 3 occurred at disk power-on lifetime: 17314 hours (721 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ec 00 00 00 00 00 a0 00 00:15:10.016 IDENTIFY DEVICE
00 00 01 01 00 00 00 00 00:15:10.016 NOP [Abort queued commands]
00 00 01 01 00 00 00 00 00:15:10.014 NOP [Abort queued commands]
ec 00 00 00 00 00 a0 00 00:15:10.009 IDENTIFY DEVICE
00 00 01 01 00 00 40 00 00:15:10.009 NOP [Abort queued commands]
Error 2 occurred at disk power-on lifetime: 17314 hours (721 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ec 00 00 00 00 00 a0 00 00:15:10.009 IDENTIFY DEVICE
00 00 01 01 00 00 40 00 00:15:10.009 NOP [Abort queued commands]
00 00 01 01 00 00 40 00 00:15:10.006 NOP [Abort queued commands]
2f 00 01 10 00 00 a0 08 00:15:10.006 READ LOG EXT
60 00 48 4f 9c c9 40 08 00:15:09.952 READ FPDMA QUEUED
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Completed [00% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-41-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: SAMSUNG SpinPoint F4 EG (AFT)
Device Model: SAMSUNG HD204UI
Serial Number: S2H7JD2B105687
LU WWN Device Id: 5 0024e9 00492cb32
Firmware Version: 1AQ10001
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 6
Local Time is: Sat May 11 22:41:01 2013 PDT
==> WARNING: Using smartmontools or hdparm with this
drive may result in data loss due to a firmware bug.
****** THIS DRIVE MAY OR MAY NOT BE AFFECTED! ******
Buggy and fixed firmware report same version number!
See the following web pages for details:
http://www.samsung.com/global/business/hdd/faqView.do?b2b_bbs_msg_id=386
http://sourceforge.net/apps/trac/smartmontools/wiki/SamsungF4EGBadBlocks
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (20100) seconds.
Offline data collection
capabilities: (0x5b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
SCT capabilities: (0x003f) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 0
2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
3 Spin_Up_Time 0x0023 068 067 025 Pre-fail Always - 9815
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 55
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 18611
10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
181 Program_Fail_Cnt_Total 0x0022 091 091 000 Old_age Always - 198135164
191 G-Sense_Error_Rate 0x0022 252 252 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
194 Temperature_Celsius 0x0002 062 057 000 Old_age Always - 38 (Min/Max 17/43)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0036 100 100 000 Old_age Always - 23
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 0
223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0
225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 60
SMART Error Log Version: 1
ATA Error Count: 1
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 1 occurred at disk power-on lifetime: 17266 hours (719 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 00 00 00 00 40
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
18 9f 18 9f 18 f0 18 9f 10:54:18.685 RECALIBRATE [RET-4]
00 00 00 00 00 00 00 08 00:12:14.743 NOP [Abort queued commands]
61 00 00 3f 10 f4 40 08 00:12:14.746 WRITE FPDMA QUEUED
61 00 00 3f 0c f4 40 08 00:12:14.746 WRITE FPDMA QUEUED
61 00 00 3f 0a f4 40 08 00:12:14.746 WRITE FPDMA QUEUED
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
SMART Selective self-test log data structure revision number 0
Note: revision number not 1 implies that no selective self-test has ever been run
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Completed [00% left] (0-65535)
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
* Re: Wiki-recovering failed raid, overlay problem
2013-06-02 0:40 ` Chris Finley
@ 2013-06-02 1:32 ` Phil Turmel
2013-06-02 4:25 ` Phil Turmel
2013-06-02 5:07 ` Chris Finley
0 siblings, 2 replies; 9+ messages in thread
From: Phil Turmel @ 2013-06-02 1:32 UTC (permalink / raw)
To: Chris Finley
On 06/01/2013 08:40 PM, Chris Finley wrote:
> On Sat, Jun 1, 2013 at 4:30 PM, Phil Turmel <philip@turmel.org> wrote:
>> Hi Chris,
>>
>> On 06/01/2013 02:23 AM, Chris Finley wrote:
>>> I am trying to recover a failed Raid 5 array by following the guide at
>>> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
>>
>> Stop. Report the *critical* details of your setup. At least:
>
> Thank you for the reply.
>
> Oh, yes. I'm the guy from an earlier post:
> http://marc.info/?l=linux-raid&m=136840333618808&w=2
I missed it--I must have been busy.
> Because each of the drives had some read errors, I thought it would be
> safer to make the first attempt with overlays. There is always the
> possibility of me entering a command incorrectly too :)
As long as the original metadata is still present, mdadm is quite
robust. Overlays are useful when you don't know the original metadata
properties and don't have enough spare drives.
The material provided is quite complete, but lacks a correlation between
device names and drive serial numbers. I'd like some more confidence there:
Please show the output of my 'lsdrv' script [1] as your system is now
set up.
Your drive with S/N S2H7JD2B105688 seems to be the worst, with
triple-digit pending sectors. This suggests a mismatch between your
drives' error correction time limits and the linux drivers' default
timeout. And a lack of regular scrubbing to clean up pending sectors.
"smartctl -l scterc" for each drive would give useful information.
Anyways, the drive may not be really failing--it has zero relocations.
If S2H7JD2B105688 was the old /dev/sdd, then it doesn't matter, but
you've now lost the opportunity to correct those sectors.
Phil
[1] http://github.com/pturmel/lsdrv/
* Re: Wiki-recovering failed raid, overlay problem
2013-06-02 1:32 ` Phil Turmel
@ 2013-06-02 4:25 ` Phil Turmel
2013-06-02 5:07 ` Chris Finley
1 sibling, 0 replies; 9+ messages in thread
From: Phil Turmel @ 2013-06-02 4:25 UTC (permalink / raw)
To: Chris Finley; +Cc: linux-raid
Whoops, noticed that I dropped the list... Convention on kernel.org is
reply-to-all, as non-subscribers are welcome.
On 06/01/2013 09:32 PM, Phil Turmel wrote:
> On 06/01/2013 08:40 PM, Chris Finley wrote:
>> On Sat, Jun 1, 2013 at 4:30 PM, Phil Turmel <philip@turmel.org> wrote:
>>> Hi Chris,
>>>
>>> On 06/01/2013 02:23 AM, Chris Finley wrote:
>>>> I am trying to recover a failed Raid 5 array by following the guide at
>>>> https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID
>>>
>>> Stop. Report the *critical* details of your setup. At least:
>>
>> Thank you for the reply.
>>
>> Oh, yes. I'm the guy from an earlier post:
>> http://marc.info/?l=linux-raid&m=136840333618808&w=2
>
> I missed it--I must have been busy.
>
>> Because each of the drives had some read errors, I thought it would be
>> safer to make the first attempt with overlays. There is always the
>> possibility of me entering a command incorrectly too :)
>
> As long as the original metadata is still present, mdadm is quite
> robust. Overlays are useful when you don't know the original metadata
> properties and don't have enough spare drives.
>
> The material provided is quite complete, but lacks a correlation between
> device names and drive serial numbers. I'd like some more confidence there:
>
> Please show the output of my 'lsdrv' script [1] as your system is now
> set up.
>
> Your drive with S/N S2H7JD2B105688 seems to be the worst, with
> triple-digit pending sectors. This suggests a mismatch between your
> drives' error correction time limits and the linux drivers' default
> timeout. And a lack of regular scrubbing to clean up pending sectors.
> "smartctl -l scterc" for each drive would give useful information.
> Anyways, the drive may not be really failing--it has zero relocations.
>
> If S2H7JD2B105688 was the old /dev/sdd, then it doesn't matter, but
> you've now lost the opportunity to correct those sectors.
>
> Phil
>
> [1] http://github.com/pturmel/lsdrv/
>
* Re: Wiki-recovering failed raid, overlay problem
2013-06-02 1:32 ` Phil Turmel
2013-06-02 4:25 ` Phil Turmel
@ 2013-06-02 5:07 ` Chris Finley
2013-06-02 13:53 ` Phil Turmel
1 sibling, 1 reply; 9+ messages in thread
From: Chris Finley @ 2013-06-02 5:07 UTC (permalink / raw)
To: Phil Turmel, linux-raid
>
> Please show the output of my 'lsdrv' script [1] as your system is now
> set up.
>
# ./raidfail/lsdrv
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
├scsi 0:0:0:0 ATA WDC WD20EARX-00P {WD-WCAZAL145223}
│└sda 1.82t [8:0] Partitioned (dos)
│ ├sda1 4.66g [8:1] ext4 {23b488a2-5a22-487a-a83f-bfa761754617}
│ │└Mounted as /dev/sda1 @ /boot
│ ├sda2 1.00k [8:2] Partitioned (dos)
│ ├sda5 29.80g [8:5] swap {720281cd-d82f-4368-ae44-f68408f28282}
│ ├sda6 51.22g [8:6] ext4 {321440e1-4078-4605-9d3d-4419bcb4d618}
│ │└Mounted as /dev/sda6 @ /var
│ └sda7 1.74t [8:7] ext4 {f52145cc-c13f-4230-89a0-e2a343f956f7}
│ └Mounted as /dev/disk/by-uuid/f52145cc-c13f-4230-89a0-e2a343f956f7 @ /
├scsi 1:x:x:x [Empty]
├scsi 2:x:x:x [Empty]
├scsi 3:x:x:x [Empty]
├scsi 4:x:x:x [Empty]
└scsi 5:x:x:x [Empty]
PCI [ahci] 04:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 02)
├scsi 6:0:0:0 ATA ST2000DL004 HD20 {S2H7J9FC302772}
│└sdb 1.82t [8:16] Partitioned (dos)
│ └sdb1 1.82t [8:17] MD raid5 (4) inactive {44ecd957-d23c-44b1-b664-13437cc40f45}
└scsi 7:x:x:x [Empty]
PCI [sata_sil24] 07:01.0 RAID bus controller: Silicon Image, Inc. SiI 3124 PCI-X Serial ATA Controller (rev 02)
├scsi 8:0:0:0 ATA SAMSUNG HD204UI {S2H7JD2B105685}
│└sdc 1.82t [8:32] Partitioned (dos)
│ └sdc1 1.82t [8:33] MD raid5 (4) inactive {44ecd957-d23c-44b1-b664-13437cc40f45}
├scsi 9:x:x:x [Empty]
├scsi 10:0:0:0 ATA SAMSUNG HD204UI {S2H7JD2B105686}
│└sdd 1.82t [8:48] Partitioned (dos)
│ └sdd1 1.82t [8:49] MD raid5 (4) inactive {44ecd957-d23c-44b1-b664-13437cc40f45}
└scsi 11:0:0:0 ATA SAMSUNG HD204UI {S2H7JD2B105687}
 └sde 1.82t [8:64] Partitioned (dos)
  └sde1 1.82t [8:65] MD raid5 (4) inactive {44ecd957-d23c-44b1-b664-13437cc40f45}
PCI [pata_jmicron] 04:00.1 IDE interface: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 02)
├scsi 12:0:0:0 LITE-ON DVDRW SHM-165H6S {LITE-ON_DVDRW_SHM-165H6S}
│└sr0 1.00g [11:0] Empty/Unknown
└scsi 13:x:x:x [Empty]
Other Block Devices
├loop0 0.00k [7:0] Empty/Unknown
├loop1 0.00k [7:1] Empty/Unknown
├loop2 0.00k [7:2] Empty/Unknown
├loop3 0.00k [7:3] Empty/Unknown
├loop4 0.00k [7:4] Empty/Unknown
├loop5 0.00k [7:5] Empty/Unknown
├loop6 0.00k [7:6] Empty/Unknown
├loop7 0.00k [7:7] Empty/Unknown
├ram0 64.00m [1:0] Empty/Unknown
├ram1 64.00m [1:1] Empty/Unknown
├ram2 64.00m [1:2] Empty/Unknown
├ram3 64.00m [1:3] Empty/Unknown
├ram4 64.00m [1:4] Empty/Unknown
├ram5 64.00m [1:5] Empty/Unknown
├ram6 64.00m [1:6] Empty/Unknown
├ram7 64.00m [1:7] Empty/Unknown
├ram8 64.00m [1:8] Empty/Unknown
├ram9 64.00m [1:9] Empty/Unknown
├ram10 64.00m [1:10] Empty/Unknown
├ram11 64.00m [1:11] Empty/Unknown
├ram12 64.00m [1:12] Empty/Unknown
├ram13 64.00m [1:13] Empty/Unknown
├ram14 64.00m [1:14] Empty/Unknown
└ram15 64.00m [1:15] Empty/Unknown
> Your drive with S/N S2H7JD2B105688 seems to be the worst, with
> triple-digit pending sectors. This suggests a mismatch between your
> drives' error correction time limits and the linux drivers' default
> timeout.
I'm not sure that I understand this. Wouldn't the drive move a bad
sector regardless of the OS timeout?
Can you point me to more information on correcting the time limits?
The change in device mapping went like this:
At Failure --> Now
sdc --> sdc
sdd (2nd drop, most errors) --> ddrescue to sdb and then unplugged
sde (1st drop, low event count) --> sdd
sdf --> sde
> And a lack of regular scrubbing to clean up pending sectors.
> "smartctl -l scterc" for each drive would give useful information.
> Anyways, the drive may not be really failing--it has zero relocations.
>
> If S2H7JD2B105688 was the old /dev/sdd, then it doesn't matter, but
> you've now lost the opportunity to correct those sectors.
The failed sdd has the serial number S2H7JD2B105688. I still have the
drive; it's just unplugged.
Running "smartctl -l scterc" produces some interesting results.
# smartctl -l scterc /dev/sdb
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-44-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
SCT Error Recovery Control:
Read: Disabled
Write: Disabled
# smartctl -l scterc /dev/sdc
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-44-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
SCT Error Recovery Control:
Read: Disabled
Write: Disabled
# smartctl -l scterc /dev/sdd
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-44-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
SCT Error Recovery Control:
Read: Disabled
Write: Disabled
# smartctl -l scterc /dev/sde
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-44-generic] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
SCT Error Recovery Control:
Read: Disabled
Write: Disabled
What is going on here? How would error recovery get disabled?
>
> Phil
>
> [1] http://github.com/pturmel/lsdrv/
* Re: Wiki-recovering failed raid, overlay problem
2013-06-02 5:07 ` Chris Finley
@ 2013-06-02 13:53 ` Phil Turmel
2013-06-03 23:35 ` Chris Finley
0 siblings, 1 reply; 9+ messages in thread
From: Phil Turmel @ 2013-06-02 13:53 UTC (permalink / raw)
To: Chris Finley; +Cc: linux-raid
On 06/02/2013 01:07 AM, Chris Finley wrote:
>>
>> Please show the output of my 'lsdrv' script [1] as your system is now
>> set up.
[trim /]
Ok. Documented.
>> Your drive with S/N S2H7JD2B105688 seems to be the worst, with
>> triple-digit pending sectors. This suggests a mismatch between your
>> drives' error correction time limits and the linux drivers' default
>> timeout.
>
> I'm not sure that I understand this. Wouldn't the drive move a bad
> sector regardless of the OS timeout?
No. If the drive takes longer than the linux driver's timeout (default
30 seconds) to report a typical unrecoverable read error, the
controller's attempt to reset the link disrupts the MD attempt to
rewrite the problem sector. This failed *write* kicks the drive out of
the array when the sector would otherwise have been corrected.
This is almost certainly what happened to your first dropped drive. It
is otherwise healthy.
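You can see the driver timeout in sysfs (a sketch; substitute your
actual device names):

cat /sys/block/sdc/device/timeout

On a stock kernel that prints 30, i.e. 30 seconds.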
> Can you point me to more information on correcting the time limits?
There are numerous discussions in the archives... search them for
combinations of "scterc", "tler", and "ure".
> The change in device mapping went like this:
> At Failure --> Now
> sdc --> sdc
> sdd (2nd drop, most errors) --> ddrescue to sdb and then unplugged
> sde (1st drop, low event count) --> sdd
> sdf --> sde
So your device role order is /dev/sd{c,b,d,e}1.
>> And a lack of regular scrubbing to clean up pending sectors.
>> "smartctl -l scterc" for each drive would give useful information.
>> Anyways, the drive may not be really failing--it has zero relocations.
>>
>> If S2H7JD2B105688 was the old /dev/sdd, then it doesn't matter, but
>> you've now lost the opportunity to correct those sectors.
>
> The failed sdd has the serial number S2H7JD2B105688. I still have the
> drive, it's just unplugged.
You may want to revisit this drive. ddrescue simply puts zeros where
the unreadable sectors were. A running raid5 or raid6 array will fix
those unreadable sectors when encountered, as long as the drive timeouts
are short.
> Running "smartctl -l scterc" produces some interesting results.
Sadly, no. These are what I expected. And they show the reason
consumer-grade desktop drives are not warranted for use in raid arrays.
> # smartctl -l scterc /dev/sdb
> smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-44-generic] (local build)
> Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
>
> SCT Error Recovery Control:
> Read: Disabled
> Write: Disabled
[trim /]
> What is going on here? How would error recovery get disabled?
On enterprise drives, or otherwise raid-rated drives, scterc defaults to
a small number on power-up, typically 7.0 seconds. This is perfect for
MD raid.
On desktop drives, sold for systems without raid, aggressive (long)
error recovery is good--the user would want the drive to make every
possible effort to retrieve its data. Most consumer drives will try for
two minutes or more, and will ignore any controller signals while doing
so. Unfortunately, this behavior breaks raid arrays.
Good desktop drives, like yours, offer a setting to adjust this
behavior. When needed, it must be set at every drive power-up. You
need suitable commands in your startup scripts (rc.local or equivalent).
Most desktop drives do not even offer scterc. This protects the
manufacturers' markup for raid-rated drives. When the drive timeout
cannot be shortened, the linux driver timeout must be lengthened.
Again, one would need suitable commands in the system startup scripts.
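A rough sketch of those startup commands (the device names here are
illustrative, not necessarily yours):

  # Drives that support scterc: cap error recovery at 7.0 seconds.
  # smartctl takes the read,write limits in tenths of a second.
  for dev in sdb sdc sdd sde; do
      smartctl -l scterc,70,70 /dev/$dev
  done

  # Drives without scterc: raise the kernel-side timeout instead.
  echo 180 > /sys/block/sdX/device/timeout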
Finally, raid arrays need to be exercised to encounter (and fix) the
UREs as they develop, so they don't accumulate. The only way to be sure
the entire data surface is read (including parity or mirror copies) is
to ask the array to "check" itself. I recommend this scrub on a weekly
basis.
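A one-line cron entry is enough for that. As a sketch, assuming the
array is /dev/md0:

  # /etc/cron.d/mdadm-scrub (illustrative): check scrub, Sundays at 1am
  0 1 * * 0  root  echo check > /sys/block/md0/md/sync_action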
Anyways, the quickest way for you to have a running array is to use
"mdadm --assemble --force /dev/md0 /dev/sd{c,b,e}1". This leaves out
the first dropped disk. Any remaining UREs cannot be corrected while
degraded, but the data on the first dropped disk is suspect.
Feel free to use an overlay on /dev/md0 itself while making your first
attempt to mount and access the data. If you cannot get critical data,
stop and re-assemble with all four devices.
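For reference, a minimal single-device overlay sketch (the file and
mapper names are arbitrary; you need free space for the sparse cow file):

  truncate -s $(blockdev --getsize64 /dev/md0) overlay-md0
  loop=$(losetup -f --show overlay-md0)
  size=$(blockdev --getsize /dev/md0)
  echo "0 $size snapshot /dev/md0 $loop P 8" | dmsetup create md0-overlay
  mount /dev/mapper/md0-overlay /mnt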
Phil
* Re: Wiki-recovering failed raid, overlay problem
2013-06-02 13:53 ` Phil Turmel
@ 2013-06-03 23:35 ` Chris Finley
2013-06-04 0:00 ` Phil Turmel
0 siblings, 1 reply; 9+ messages in thread
From: Chris Finley @ 2013-06-03 23:35 UTC (permalink / raw)
To: Phil Turmel; +Cc: linux-raid
On Sun, Jun 2, 2013 at 6:53 AM, Phil Turmel <philip@turmel.org> wrote:
[trim /]
>
> There are numerous discussions in the archives... search them for
> combinations of "scterc", "tler", and "ure".
>
It appears this has been a frequent issue over the last year. Thank
you for the background information; now I understand what I was reading.
>
> So your device role order is /dev/sd{c,b,d,e}1.
[trim /]
> Anyways, the quickest way for you to have a running array is to use
> "mdadm --assemble --force /dev/md0 /dev/sd{c,b,e}1". This leaves out
> the first dropped disk. Any remaining UREs cannot be corrected while
> degraded, but the data on the first dropped disk is suspect.
>
> Feel free to use an overlay on /dev/md0 itself while making your first
> attempt to mount and access the data. If you cannot get critical data,
> stop and re-assemble with all four devices.
>
> Phil
>
Thanks, I will do that.
Am I correct in thinking that I should not set scterc to 7 seconds
initially, since there will not be any parity to correct the read
errors? Would it be best to set the driver time-out to 180 seconds
until after the array is rebuilt?
I am concerned about read errors during the rebuild. With a failed and
rebuilding array, will the drive get kicked on a URE? Is it better to
use something like badblocks or dd_rescue to correct/mark the sectors
first and then rebuild? Either way, I'm going to lose that data, but
maybe there are some better tools for extracting data from a bad
sector?
After the rebuild is complete, I should set scterc to 7 seconds
and add a bitmap-based write-intent log?
Does anyone learn these things the easy way :)
Much appreciated, Chris
* Re: Wiki-recovering failed raid, overlay problem
2013-06-03 23:35 ` Chris Finley
@ 2013-06-04 0:00 ` Phil Turmel
0 siblings, 0 replies; 9+ messages in thread
From: Phil Turmel @ 2013-06-04 0:00 UTC (permalink / raw)
To: Chris Finley; +Cc: linux-raid
On 06/03/2013 07:35 PM, Chris Finley wrote:
> On Sun, Jun 2, 2013 at 6:53 AM, Phil Turmel <philip@turmel.org> wrote:
> [trim /]
>>
>> There are numerous discussions in the archives... search them for
>> combinations of "scterc", "tler", and "ure".
>>
> It appears this has been a frequent issue over the last year. Thank
> you for the background information, I understand what I was reading.
>
>>
>> So your device role order is /dev/sd{c,b,d,e}1.
>
> [trim /]
>
>> Anyways, the quickest way for you to have a running array is to use
>> "mdadm --assemble --force /dev/md0 /dev/sd{c,b,e}1". This leaves out
>> the first dropped disk. Any remaining UREs cannot be corrected while
>> degraded, but the data on the first dropped disk is suspect.
>>
>> Feel free to use an overlay on /dev/md0 itself while making your first
>> attempt to mount and access the data. If you cannot get critical data,
>> stop and re-assemble with all four devices.
>>
>> Phil
>>
> Thanks, I will do that.
>
> Am I correct in thinking that I should not set scterc to 7 seconds
> initially, since there will not be any parity to correct the read
> errors? Would it be best to set the driver time-out to 180 seconds
> until after the array is rebuilt?
Correct.
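That interim setting is one line per member drive (sdX is a
placeholder here):

  echo 180 > /sys/block/sdX/device/timeout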
> I am concerned about read errors during the rebuild. With a failed and
> rebuilding array, will the drive get kicked on a URE? Is it better to
> use something like badblocks or dd_rescue to correct/mark the sectors
> first and then rebuild? Either way, I'm going to lose that data, but
> maybe there are some better tools for extracting data from a bad
> sector?
MD raid will tolerate a burst of up to 20 read errors on a device (in
one hour), and up to 10 per hour after that.
If a drive is booted out, just reassemble and resume your backup
efforts. Or you can use dd_rescue to avoid it. Six of one, ...
> After the rebuild is complete, I should set scterc to 7 seconds
> and add a bitmap-based write-intent log?
Yes. The write-intent log is useful but unrelated to your troubles.
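Adding the bitmap later is a one-liner; a sketch, assuming your array
is /dev/md0:

  mdadm --grow /dev/md0 --bitmap=internal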
You really need a weekly cron job that'll start a "check" scrub on your
array. But:
No, don't rebuild onto a 4th drive until after you make a backup of your
critical data. There's always the chance that the data you really want
isn't on any pending UREs. Rebuilding is sure to hit those.
> Does anyone learn these things the easy way :)
Apparently not. (Certainly not me.) Anyways, the enterprise and
hobbyist use cases are really quite different. Enterprise users, who
can easily justify premium components, have few problems. Hobbyists who
are trying to apply the original meaning of "raid" (where "i" ==
inexpensive) are prone to problems.
And it isn't entirely the drive manufacturers' fault: solo duty in a
desktop really needs a different behavior than a member of a raid array.
However, I *do* fault vendors who have dropped scterc support to push
hobbyists into enterprise products. I think the market will punish them
in the long term.
> Much appreciated, Chris
You're welcome.
Phil
Thread overview: 9 messages
2013-06-01 6:23 Wiki-recovering failed raid, overlay problem Chris Finley
2013-06-01 23:30 ` Phil Turmel
2013-06-02 0:40 ` Chris Finley
2013-06-02 1:32 ` Phil Turmel
2013-06-02 4:25 ` Phil Turmel
2013-06-02 5:07 ` Chris Finley
2013-06-02 13:53 ` Phil Turmel
2013-06-03 23:35 ` Chris Finley
2013-06-04 0:00 ` Phil Turmel