* RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-26 7:33 UTC
To: 'linux-raid'
Merry Christmas all,
I have a bit of a puzzling problem. I have a set of three
partitions on a SATA hard drive all set up as second members of three RAID 1
arrays with the first members missing. I have a PATA drive partitioned in
precisely the same way as the SATA drive, and I am attempting to add the
PATA drive's partitions to the respective existing RAID 1 arrays. I can add
the partitions with `mdadm /dev/mdx --add /dev/hdax` (x = 1, 2, 3), and
everything works just fine. The thing is, however, for rather obvious
reasons I would prefer the PATA partitions to be write-mostly. If I add the
partitions to the array and then try to make the PATA partitions
write-mostly with the command `mdadm /dev/mdx -W /dev/hdax`, mdadm doesn't
complain, but it also doesn't appear to make the hdax partitions
write-mostly. (Or at least it does not report them as such.) OTOH, if I
start with a fresh partition with no existing superblock and use the command
`mdadm /dev/mdx --add -W /dev/hdax`, mdadm adds the partition and marks it
as write-mostly, but immediately fails the partition. If I remove the
partition and re-add it without zeroing the superblock, it again adds it as
a faulty spare:
/dev/md1:
Version : 01.00
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Array Size : 401580 (392.23 MiB 411.22 MB)
Used Dev Size : 401580 (392.23 MiB 411.22 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Dec 26 01:12:41 2009
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Name : 'RAID-Server':1
UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Events : 188
    Number   Major   Minor   RaidDevice   State
       0       0       0        0         removed
       1       8       1        1         active sync                /dev/sda1
       2       3       1        -         faulty writemostly spare   /dev/hda1
How can I add these partitions as write-mostly members of the RAID 1
arrays?
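For concreteness, the sequence I have been trying (shown here for x = 1; the other two partitions are handled the same way) looks roughly like this:

# start from a clean superblock, then add the PATA partition as write-mostly
mdadm --zero-superblock /dev/hda1
mdadm /dev/md1 --add --write-mostly /dev/hda1

# verify: the device should show "writemostly" in the -D output and a (W) marker in /proc/mdstat
mdadm -D /dev/md1
cat /proc/mdstat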
* RE: RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-26 19:16 UTC
To: 'linux-raid'
I have another, possibly related, question. I did a bit of snooping
around, and all six devices that are members of one of the RAID 1 arrays
report the same thing:
Array Slot : 1 (failed, 1, 0)
Array State : Uu 1 failed
(The array slot, of course, is either 1 or 2, depending on the
specific device.) Why does the superblock report something as being failed?
The devices and the arrays themselves all show clean, and running a repair
on the array has no effect on the status. This same report is generated on
all twelve drive partitions that are members of RAID 1 arrays. On one system,
both hard drives are PATA drives; on the other, one is PATA and the other
SATA. I only attempted to issue the -W directive on the mixed PATA / SATA
system.
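For reference, this is roughly how I am pulling those two fields from each member on the mixed system (the PATA partitions are hda1-hda3 and the SATA ones are sda1-sda3 here):

# dump the slot/state fields from the superblock of each RAID 1 member
for dev in /dev/sda1 /dev/sda2 /dev/sda3 /dev/hda1 /dev/hda2 /dev/hda3; do
    echo "== $dev"
    mdadm -E "$dev" | grep -E 'Array (Slot|State)'
done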
* RE: RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-30 5:19 UTC
To: 'linux-raid'
Hello? Anyone? Can someone tell me why mdadm won't add the PATA
drive as write-mostly, or why it claims one of the members is failed?
* Re: RAID 1 Disk added as faulty
From: Neil Brown @ 2009-12-30 5:42 UTC
To: Leslie Rhorer; +Cc: 'linux-raid'
On Tue, 29 Dec 2009 23:19:55 -0600
"Leslie Rhorer" <lrhorer@satx.rr.com> wrote:
>
> Hello? Anyone? Can someone tell me why mdadm won't add the PATA
> drive as Write mostly, or why it claims one of the members is failed?
The "failed" is because once there was a device in the array that it
thinks failed. The message is not helpful and newer versions of mdadm
do not include it.
As for why you cannot add the PATA drive - it seems to fail, implying a
write error when writing to it???
More info (logs etc) required.
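For example, something along these lines, captured right after a failed --add attempt, would help (adjust the array and device names to match, and the log path to your distro):

cat /proc/mdstat
mdadm -D /dev/md1
mdadm -E /dev/hda1
dmesg | tail -n 50
grep 'md:' /var/log/kern.log | tail -n 50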
NeilBrown
* RE: RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-30 6:39 UTC
To: 'Neil Brown'; +Cc: 'linux-raid'
> As for why you cannot add the PATA drive - it seems to fail, implying a
> write error when writing to it???
It doesn't seem to fail. I can add it without the write-mostly
switch and it does fine...
Well, I'll be hornswoggled! I just tried it again, and this time it
worked. I tried it several times before on all three partitions, and it
would always mark the partition faulty when I used the -W switch.
> More info (logs etc) required.
When I looked previously and again just now, I don't see anything
unusual from then:
Dec 23 11:36:24 Backup kernel: [ 551.023531] md: recovery of RAID array md1
Dec 23 11:36:24 Backup kernel: [ 551.023531] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Dec 23 11:36:24 Backup kernel: [ 551.023531] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Dec 23 11:36:24 Backup kernel: [ 551.023531] md: using 128k window, over a total of 6144816 blocks.
Dec 23 11:37:04 Backup kernel: [ 595.374568] md: bind<hda2>
Dec 23 11:37:05 Backup kernel: [ 595.686827] md: delaying recovery of md2 until md1 has finished (they share one or more physical units)
Dec 23 11:37:16 Backup kernel: [ 608.666908] md: bind<hda3>
Dec 23 11:37:17 Backup kernel: [ 608.951395] md: delaying recovery of md3 until md1 has finished (they share one or more physical units)
Dec 23 11:43:28 Backup kernel: [ 1014.009182] md: md1: recovery done.
Dec 23 11:43:28 Backup kernel: [ 1014.117880] md: delaying recovery of md3 until md2 has finished (they share one or more physical units)
Dec 23 11:43:28 Backup kernel: [ 1014.117493] md: recovery of RAID array md2
Dec 23 11:43:28 Backup kernel: [ 1014.117496] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Dec 23 11:43:28 Backup kernel: [ 1014.117498] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Dec 23 11:43:28 Backup kernel: [ 1014.117502] md: using 128k window, over a total of 277442414 blocks.
Dec 23 18:11:30 Backup kernel: [25686.451020] md: md2: recovery done.
Dec 23 18:11:30 Backup kernel: [25686.627333] md: recovery of RAID array md3
Dec 23 18:11:30 Backup kernel: [25686.627333] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Dec 23 18:11:30 Backup kernel: [25686.627333] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Dec 23 18:11:30 Backup kernel: [25686.627333] md: using 128k window, over a total of 204796548 blocks.
Dec 23 22:16:36 Backup kernel: [41884.541659] md: md3: recovery done.
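If it helps, the resync progress and per-device flags can be watched while a recovery like this is running with something along these lines:

# watch overall resync progress across all arrays
watch -n 5 cat /proc/mdstat
# or check one array's state and rebuild percentage directly
mdadm -D /dev/md1 | grep -E 'State|Rebuild|Devices'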
* RE: RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-30 7:18 UTC
To: 'Neil Brown'; +Cc: 'linux-raid'
Oops! 'Sorry, I was on the wrong server. The server with the PATA
drives also allowed me to add with a -W switch this time, but the logs on it
show even less:
RAID-Server:/tmp# grep "Dec 26 01:1" /var/log/kern.log.1
Dec 26 01:11:16 RAID-Server kernel: [22268.711641] md: unbind<hda1>
Dec 26 01:11:16 RAID-Server kernel: [22268.711641] md: export_rdev(hda1)
Dec 26 01:11:30 RAID-Server kernel: [22284.169577] md: bind<hda1>
Dec 26 01:12:13 RAID-Server kernel: [22330.315824] md: unbind<hda1>
Dec 26 01:12:13 RAID-Server kernel: [22330.315824] md: export_rdev(hda1)
Dec 26 01:12:41 RAID-Server kernel: [22360.766707] md: bind<hda1>
Dec 26 01:15:05 RAID-Server kernel: [22527.168622] md: unbind<hda1>
Dec 26 01:15:05 RAID-Server kernel: [22527.168622] md: export_rdev(hda1)
And then when I issued the command without the -W
Dec 26 01:15:46 RAID-Server kernel: [22575.663301] md: bind<hda1>
Dec 26 01:15:46 RAID-Server kernel: [22575.683991] RAID1 conf printout:
Dec 26 01:15:46 RAID-Server kernel: [22575.683994] --- wd:1 rd:2
Dec 26 01:15:46 RAID-Server kernel: [22575.683996] disk 0, wo:1, o:1, dev:hda1
Dec 26 01:15:46 RAID-Server kernel: [22575.683997] disk 1, wo:0, o:1, dev:sda1
Dec 26 01:15:46 RAID-Server kernel: [22575.684060] md: recovery of RAID array md1
Dec 26 01:15:46 RAID-Server kernel: [22575.684067] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Dec 26 01:15:46 RAID-Server kernel: [22575.684069] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Dec 26 01:15:46 RAID-Server kernel: [22575.684072] md: using 128k window, over a total of 401580 blocks.
Dec 26 01:15:55 RAID-Server kernel: [22586.500912] md: md1: recovery done.
Dec 26 01:15:55 RAID-Server kernel: [22586.734405] RAID1 conf printout:
Dec 26 01:15:55 RAID-Server kernel: [22586.734410] --- wd:2 rd:2
Dec 26 01:15:55 RAID-Server kernel: [22586.734413] disk 0, wo:0, o:1, dev:hda1
Dec 26 01:15:55 RAID-Server kernel: [22586.734415] disk 1, wo:0, o:1, dev:sda1
* Re: RAID 1 Disk added as faulty
From: Neil Brown @ 2009-12-30 8:52 UTC
To: Leslie Rhorer; +Cc: 'linux-raid'
On Wed, 30 Dec 2009 01:18:32 -0600
"Leslie Rhorer" <lrhorer@satx.rr.com> wrote:
> Oops! 'Sorry, I was on the wrong server. The server with the PATA
> drives also allowed me to add with a -W switch this time, but the logs on it
> show even less:
:-)
Can you do:
mdadm /dev/md1 -f /dev/hda1
mdadm /dev/md1 -r /dev/hda1
mdadm -D /dev/md1
mdadm -E /dev/hda1
mdadm /dev/md1 -a -W /dev/hda1
mdadm -D /dev/md1
mdadm -E /dev/hda1
and show me all the output along with kernel log messages
generated at the time.
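Something like this would capture all of it in one go (the /tmp file name is arbitrary, and the dmesg/log step may need adjusting for your setup):

{
  mdadm /dev/md1 -f /dev/hda1
  mdadm /dev/md1 -r /dev/hda1
  mdadm -D /dev/md1
  mdadm -E /dev/hda1
  mdadm /dev/md1 -a -W /dev/hda1
  mdadm -D /dev/md1
  mdadm -E /dev/hda1
} 2>&1 | tee /tmp/md1-writemostly.log
dmesg | tail -n 60 >> /tmp/md1-writemostly.log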
NeilBrown
* RE: RAID 1 Disk added as faulty
From: Leslie Rhorer @ 2009-12-30 15:31 UTC
To: 'Neil Brown'; +Cc: 'linux-raid'
Well, like I said, it's working now, but OK. In the log in the
previous message, that is precisely what I did. At the command line,
everything looked perfectly normal until I checked the status of the array,
where it showed the devices failed on all three partitions. In the log, all
it showed were the bind and unbind statements. 'No errors, no status
statements, nothing. Anyway, here is the output now that it is working:
RAID-Server:/tmp# mdadm /dev/md1 -f /dev/hda1
mdadm: set /dev/hda1 faulty in /dev/md1
RAID-Server:/tmp# mdadm /dev/md1 -r /dev/hda1
mdadm: hot removed /dev/hda1
RAID-Server:/tmp# mdadm -D /dev/md1
/dev/md1:
Version : 01.00
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Array Size : 401580 (392.23 MiB 411.22 MB)
Used Dev Size : 401580 (392.23 MiB 411.22 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Dec 30 09:18:58 2009
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : 'RAID-Server':1
UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Events : 218
    Number   Major   Minor   RaidDevice   State
       0       0       0        0         removed
       1       8       1        1         active sync   /dev/sda1
RAID-Server:/tmp# mdadm -E /dev/hda1
/dev/hda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Name : 'RAID-Server':1
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 803160 (392.23 MiB 411.22 MB)
Array Size : 803160 (392.23 MiB 411.22 MB)
Super Offset : 803168 sectors
State : clean
Device UUID : 28fa09ed:07bf99e2:e3a3b396:9fe389d3
Internal Bitmap : 2 sectors from superblock
Update Time : Wed Dec 30 07:55:41 2009
Checksum : 9d6dd6e5 - correct
Events : 214
Array Slot : 2 (failed, 1, 0)
Array State : Uu 1 failed
RAID-Server:/tmp# mdadm /dev/md1 -a -W /dev/hda1
mdadm: re-added /dev/hda1
RAID-Server:/tmp# mdadm -D /dev/md1
/dev/md1:
Version : 01.00
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Array Size : 401580 (392.23 MiB 411.22 MB)
Used Dev Size : 401580 (392.23 MiB 411.22 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Dec 30 09:18:58 2009
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 'RAID-Server':1
UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Events : 226
    Number   Major   Minor   RaidDevice   State
       2       3       1        0         active sync writemostly   /dev/hda1
       1       8       1        1         active sync               /dev/sda1
RAID-Server:/tmp# mdadm -E /dev/hda1
/dev/hda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Name : 'RAID-Server':1
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 803160 (392.23 MiB 411.22 MB)
Array Size : 803160 (392.23 MiB 411.22 MB)
Super Offset : 803168 sectors
State : clean
Device UUID : 28fa09ed:07bf99e2:e3a3b396:9fe389d3
Internal Bitmap : 2 sectors from superblock
Update Time : Wed Dec 30 09:18:58 2009
Checksum : 9d6dea76 - correct
Events : 226
Array Slot : 2 (failed, 1, 0)
Array State : Uu 1 failed
RAID-Server:/tmp# tail -n 80 /var/log/syslog
Dec 30 09:18:58 RAID-Server kernel: [424760.811675] raid1: Disk failure on hda1, disabling device.
Dec 30 09:18:58 RAID-Server kernel: [424760.811675] raid1: Operation continuing on 1 devices.
Dec 30 09:18:58 RAID-Server kernel: [424760.818778] RAID1 conf printout:
Dec 30 09:18:58 RAID-Server kernel: [424760.818778] --- wd:1 rd:2
Dec 30 09:18:58 RAID-Server kernel: [424760.818778] disk 0, wo:1, o:0, dev:hda1
Dec 30 09:18:58 RAID-Server kernel: [424760.818778] disk 1, wo:0, o:1, dev:sda1
Dec 30 09:18:58 RAID-Server kernel: [424760.839325] RAID1 conf printout:
Dec 30 09:18:58 RAID-Server kernel: [424760.839328] --- wd:1 rd:2
Dec 30 09:18:58 RAID-Server kernel: [424760.839330] disk 1, wo:0, o:1, dev:sda1
Dec 30 09:18:58 RAID-Server kernel: [424760.842763] md: unbind<hda1>
Dec 30 09:18:58 RAID-Server kernel: [424760.842763] md: export_rdev(hda1)
Dec 30 09:18:58 RAID-Server kernel: [424761.251480] md: bind<hda1>
Dec 30 09:18:58 RAID-Server kernel: [424761.272603] RAID1 conf printout:
Dec 30 09:18:58 RAID-Server kernel: [424761.272603] --- wd:1 rd:2
Dec 30 09:18:58 RAID-Server kernel: [424761.272603] disk 0, wo:1, o:1, dev:hda1
Dec 30 09:18:58 RAID-Server kernel: [424761.272603] disk 1, wo:0, o:1, dev:sda1
Dec 30 09:18:58 RAID-Server kernel: [424761.275141] md: recovery of RAID array md1
Dec 30 09:18:58 RAID-Server kernel: [424761.275141] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Dec 30 09:18:58 RAID-Server kernel: [424761.275141] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
Dec 30 09:18:58 RAID-Server kernel: [424761.275141] md: using 128k window, over a total of 401580 blocks.
Dec 30 09:18:58 RAID-Server kernel: [424761.278610] md: md1: recovery done.
Dec 30 09:18:58 RAID-Server kernel: [424761.294051] RAID1 conf printout:
Dec 30 09:18:58 RAID-Server kernel: [424761.294051] --- wd:2 rd:2
Dec 30 09:18:58 RAID-Server kernel: [424761.294051] disk 0, wo:0, o:1, dev:hda1
Dec 30 09:18:58 RAID-Server kernel: [424761.294051] disk 1, wo:0, o:1, dev:sda1
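For future reference, it looks like the write-mostly flag can also be toggled on an already-active member through the md sysfs interface on kernels that support it, which would avoid the fail/remove/re-add cycle entirely (untested here; the paths assume md1 and hda1 as above):

# set write-mostly on an active member without removing it (kernel support permitting)
echo writemostly > /sys/block/md1/md/dev-hda1/state
# a write-mostly member should show up with a (W) marker in /proc/mdstat
grep -A 2 '^md1' /proc/mdstat
# and to clear the flag again:
echo -writemostly > /sys/block/md1/md/dev-hda1/state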