* RAID 6 recovery issue
@ 2015-01-20 16:46 Graham Mitchell
From: Graham Mitchell @ 2015-01-20 16:46 UTC (permalink / raw)
To: linux-raid
I've been having a heck of a time sending this - apologies if anyone sees
this email more than once (I've not seen it hit the list either of the two
previous times I've sent it).
I'm having an issue with one of my RAID-6 arrays. For some reason, mdadm's
email notification wasn't set up, so I never found out I had a couple of bad
drives in the array until last night.
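For anyone else bitten by this, the alerting I should have had in place is
just a MAILADDR line in mdadm.conf plus the monitor daemon. A minimal sketch
(the address is a placeholder):

# /etc/mdadm.conf -- where the mdadm monitor sends alerts
MAILADDR admin@example.com

# then run the monitor; most distros start this as the mdmonitor service
mdadm --monitor --scan --daemonise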
Originally, when I looked at the output of /proc/mdstat, it showed the
array running with 15 of its 17 drives:
[gmitch@file00bert ~]$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[19] sdi1[16] sdh1[12] sdf1[4] sdr1[18] sdg1[5](F)
sdj1[7] sdo1[22] sdt1[14] sdd1[13] sdl1[0](F) sda1[20] sdb1[1] sdk1[21]
sdn1[10] sdc1[2] sdm1[15] sdq1[17]
7325752320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [17/15]
[_UUUU_UUUUUUUUUUU]
[>....................] recovery = 0.4% (2421508/488383488)
finish=180.7min speed=44805K/sec
As you can see, device 19 (sde1) is showing as a normal member of the array
(no (F) or (S) flag), and a rebuild is under way.
My original plan was to partition off 500GB from one of the 1TB drives I
have spare in the server and add that partition to the array. Once the
rebuild onto it had finished, I was going to carve off 500GB from the other
drive and let the array rebuild with that as well.
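For completeness, what I had in mind was roughly this (a sketch only;
/dev/sdy stands in for whichever spare 1TB drive I'd have used):

parted -s /dev/sdy -- mklabel gpt mkpart primary 1MiB 500GB   # carve off ~500GB
mdadm /dev/md0 --add /dev/sdy1           # md rebuilds onto it since the array is degraded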
I created the partition on one of the drives and was going to add it to the
array, but stopped when I saw that the array was in recovery (I started up
watch /proc/mdstat in another window).
I went to have dinner, and when I came back I found the array was now very
unhappy; cat /proc/mdstat showed:
[root@file00bert ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[19](S) sdi1[16] sdh1[12] sdf1[4] sdr1[18] sdg1[5](F)
sdj1[7] sdt1[14] sdd1[13] sdl1[0](F) sda1[20] sdb1[1] sdk1[21] sdn1[10]
sdc1[2] sdm1[15] sdq1[17]
7325752320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [17/14]
[_UUUU_UUUUUUUUUU_]
Device 19 has gone from a live drive to a spare. I've done an
examine of all the drives, and the event counts look to be reasonable:
[root@file00bert ~]# mdadm -E /dev/sd[a-z]1 | egrep 'Event|/dev'
/dev/sda1:
Events : 1452687
/dev/sdb1:
Events : 1452687
/dev/sdc1:
Events : 1452687
/dev/sdd1:
Events : 1452687
/dev/sde1:
Events : 1452687
/dev/sdf1:
Events : 1452687
/dev/sdh1:
Events : 1452687
/dev/sdi1:
Events : 1452687
/dev/sdj1:
Events : 1452687
/dev/sdk1:
Events : 1452687
/dev/sdm1:
Events : 1452687
/dev/sdn1:
Events : 1452687
/dev/sdo1:
Events : 1452661
/dev/sdq1:
Events : 1452687
/dev/sdr1:
Events : 1452687
/dev/sdt1:
Events : 1452687
/dev/sdw1:
Events : 1431553
/dev/sdx1:
Events : 1431964
[root@file00bert ~]#
All of the event counts look to be within acceptable limits (are they?), and
device 19 (sde1) has the same event count as most of the drives, but for some
reason it is now marked as a spare. I've not stopped the array yet, but I've
not written anything to it either. I'm not sure if taking the array down and
then restarting it with a force is the right course of action. My googling
isn't showing a conclusive answer, so I thought I should seek some advice
before I went and did something that wrecked the array.
What should my next steps to recover the array be? I think all I need to do
is somehow get device 19 (sde1) to believe it's a real member of the array
again, rather than a spare. Or should I be kicking it out, and getting
things running with sdo1?
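For reference, the sequence my googling keeps turning up is roughly the
following. I have NOT run any of it yet; the member list is just my guess at
the 16 usable partitions, and whether sdo1 (with its slightly older event
count) belongs in it is exactly what I'm unsure about:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[abcdefhijkmnqrt]1 /dev/sdo1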
[root@file00bert ~]# uname -a
Linux file00bert.woodlea.org.uk 2.6.32-358.2.1.el6.x86_64 #1 SMP Wed Mar 13
00:26:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@file00bert ~]# mdadm --version
mdadm - v3.2.5 - 18th May 2012
Thanks.
Graham
* Re: RAID 6 recovery issue
@ 2015-01-20 16:52 Roman Mamedov
From: Roman Mamedov @ 2015-01-20 16:52 UTC (permalink / raw)
To: Graham Mitchell; +Cc: linux-raid
On Tue, 20 Jan 2015 11:46:45 -0500
"Graham Mitchell" <gmitch@woodlea.com> wrote:
> but for some reason it is now marked as a spare.
^^^^^^^^^^^^^^^
And you are not looking into 'dmesg' to find out why on purpose, to add some
mystery and suspense to the situation? :)
--
With respect,
Roman
* RE: RAID 6 recovery issue
@ 2015-01-20 18:32 Graham Mitchell
From: Graham Mitchell @ 2015-01-20 18:32 UTC (permalink / raw)
To: 'Roman Mamedov'; +Cc: 'linux-raid'
> to add some mystery and suspense to the situation? :)
Sorry, didn't want to make it too easy... :) It's been a long couple of
days...
dmesg says nothing of interest at all:
[root@file00bert log]# dmesg | grep sde
sde: sde1
md: bind<sde1>
/var/log/messages says a bit more, but not much:
[root@file00bert log]# cat messages-20150118 | grep sde
Jan 17 20:29:19 file00bert kernel: md: export_rdev(sde1)
Jan 17 20:29:19 file00bert kernel: md: bind<sde1>
Jan 17 22:35:24 file00bert kernel: md: unbind<sde1>
Jan 17 22:35:24 file00bert kernel: md: export_rdev(sde1)
Jan 17 23:03:05 file00bert kernel: sde: sde1
Jan 17 23:03:05 file00bert kernel: md: bind<sde1>
Not quite sure why md decided to unbind sde1...
Around that point in the messages file, I see this...
Jan 17 20:24:42 file00bert kernel: sdp:
Jan 17 20:27:57 file00bert kernel: sds:
Jan 17 20:29:19 file00bert kernel: md: export_rdev(sde1)
Jan 17 20:29:19 file00bert kernel: md: bind<sde1>
Jan 17 20:29:19 file00bert kernel: md: recovery of RAID array md0
Jan 17 20:29:19 file00bert kernel: md: minimum _guaranteed_ speed: 1000
KB/sec/disk.
Jan 17 20:29:19 file00bert kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for recovery.
Jan 17 20:29:19 file00bert kernel: md: using 128k window, over a total of
488383488k.
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Synchronizing SCSI
cache
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Unhandled error code
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] CDB: Read(10): 28 00
00 5f 42 3f 00 00 e8 00
Jan 17 20:30:34 file00bert kernel: __ratelimit: 182 callbacks suppressed
Jan 17 20:30:34 file00bert kernel: md/raid:md0: Disk failure on sdo1,
disabling device.
Jan 17 20:30:34 file00bert kernel: md/raid:md0: Operation continuing on 14
devices.
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242896 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242904 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242912 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242920 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242928 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242936 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242944 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242952 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242960 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242968 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242976 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242984 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242992 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243000 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243008 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243016 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243024 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243032 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243040 on sdo1).
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Unhandled error code
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] CDB: Read(10): 28 00
00 5f 43 27 00 00 18 00
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243048 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243056 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243064 on sdo1).
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Unhandled error code
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] CDB: Read(10): 28 00
00 5f 43 3f 00 01 00 00
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243072 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243080 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243088 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243096 on sdo1).
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6243104 on sdo1).
It goes on for a bit, then I get this...
Jan 17 20:30:34 file00bert kernel: md/raid:md0: read error not correctable
(sector 6242808 on sdo1).
Jan 17 20:30:34 file00bert kernel: sd 0:0:14:0: [sdo] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jan 17 20:30:34 file00bert kernel: mpt2sas0: removing handle(0x0018),
sas_addr(0x5001517e3b0af0ae)
Jan 17 20:30:34 file00bert kernel: md: md0: recovery done.
Jan 17 20:30:34 file00bert kernel: md: unbind<sdo1>
Jan 17 20:30:34 file00bert kernel: md: export_rdev(sdo1)
Jan 17 20:30:36 file00bert kernel: scsi 0:0:178:0: Direct-Access ATA
ST3500630AS K PQ: 0 ANSI: 6
Jan 17 20:30:36 file00bert kernel: scsi 0:0:178:0: SATA: handle(0x0018),
sas_addr(0x5001517e3b0af0ae), phy(14), device_name(0x0000000000000000)
Jan 17 20:30:36 file00bert kernel: scsi 0:0:178:0: SATA:
enclosure_logical_id(0x5001517e3b0af0bf), slot(14)
Jan 17 20:30:36 file00bert kernel: scsi 0:0:178:0: atapi(n), ncq(y),
asyn_notify(n), smart(y), fua(y), sw_preserve(y)
Jan 17 20:30:36 file00bert kernel: scsi 0:0:178:0: qdepth(32), tagged(1),
simple(1), ordered(0), scsi_level(7), cmd_que(1)
Jan 17 20:30:36 file00bert kernel: sd 0:0:178:0: Attached scsi generic sg14
type 0
Jan 17 20:30:36 file00bert kernel: sd 0:0:178:0: [sdo] 976773168 512-byte
logical blocks: (500 GB/465 GiB)
Jan 17 20:30:36 file00bert kernel: sd 0:0:178:0: [sdo] Write Protect is off
Jan 17 20:30:36 file00bert kernel: sd 0:0:178:0: [sdo] Write cache: enabled,
read cache: enabled, supports DPO and FUA
Jan 17 20:30:36 file00bert kernel: sdo: sdo1
Jan 17 20:30:36 file00bert kernel: sd 0:0:178:0: [sdo] Attached SCSI disk
Jan 17 20:32:13 file00bert kernel: md: requested-resync of RAID array md0
Jan 17 20:32:13 file00bert kernel: md: minimum _guaranteed_ speed: 1000
KB/sec/disk.
Jan 17 20:32:13 file00bert kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for requested-resync.
Jan 17 20:32:13 file00bert kernel: md: using 128k window, over a total of
488383488k.
Jan 17 20:32:13 file00bert kernel: md: md0: requested-resync done.
Jan 17 20:38:27 file00bert kernel: [drm] nouveau 0000:05:03.0: Load detected
on output A
Jan 17 21:13:33 file00bert kernel: [drm] nouveau 0000:05:03.0: Setting dpms
mode 3 on vga encoder (output 0)
Jan 17 21:24:43 file00bert kernel: [drm] nouveau 0000:05:03.0: Setting dpms
mode 0 on vga encoder (output 0)
Jan 17 21:54:49 file00bert kernel: [drm] nouveau 0000:05:03.0: Setting dpms
mode 3 on vga encoder (output 0)
Jan 17 22:24:43 file00bert kernel: [drm] nouveau 0000:05:03.0: Setting dpms
mode 0 on vga encoder (output 0)
Jan 17 22:35:24 file00bert kernel: md: unbind<sde1>
Jan 17 22:35:24 file00bert kernel: md: export_rdev(sde1)
Jan 17 22:40:00 file00bert kernel: EXT4-fs error (device md0):
__ext4_get_inode_loc: unable to read inode block - inode=1041409,
block=1066401824
Jan 17 22:40:01 file00bert kernel: Buffer I/O error on device md0, logical
block 0
Jan 17 22:40:01 file00bert kernel: lost page write due to I/O error on md0
Jan 17 22:40:05 file00bert kernel: Aborting journal on device md0-8.
Jan 17 22:40:05 file00bert kernel: Buffer I/O error on device md0, logical
block 793280512
Jan 17 22:40:05 file00bert kernel: lost page write due to I/O error on md0
Jan 17 22:40:05 file00bert kernel: JBD2: I/O error detected when updating
journal superblock for md0-8.
Jan 17 22:40:17 file00bert kernel: EXT4-fs error (device md0):
ext4_journal_start_sb: Detected aborted journal
Jan 17 22:40:17 file00bert kernel: EXT4-fs (md0): Remounting filesystem
read-only
Jan 17 22:40:17 file00bert kernel: EXT4-fs error (device md0):
__ext4_get_inode_loc: unable to read inode block - inode=384003,
block=393216032