* help!
@ 2005-11-14 17:20 Shane Bishop
2005-11-14 18:55 ` help! Carlos Carvalho
0 siblings, 1 reply; 7+ messages in thread
From: Shane Bishop @ 2005-11-14 17:20 UTC (permalink / raw)
To: linux-raid
I had an mdadm device running fine, and had created my own scripts for
shutting it down and such. I upgraded my distro, and all of a sudden it
decided to start initializing md devices on its own, which include one
that I want removed. The one that should be removed is throwing off the
numbering; otherwise I wouldn't care so much. There's nothing in
mdadm.conf, so I can only assume it's something with the kernel driver?
Any help would be appreciated.
Shane Bishop
* Re: help!
2005-11-14 17:20 help! Shane Bishop
@ 2005-11-14 18:55 ` Carlos Carvalho
2005-11-14 19:24 ` help! Shane Bishop
0 siblings, 1 reply; 7+ messages in thread
From: Carlos Carvalho @ 2005-11-14 18:55 UTC (permalink / raw)
To: linux-raid
Shane Bishop (sbishop@trinitybiblecollege.edu) wrote on 14 November 2005 11:20:
>I had an mdadm device running fine, and had created my own scripts for
>shutting it down and such. I upgraded my distro, and all of a sudden it
>decided to start initializing md devices on its own, which include one
>that I want removed.
Probably the filesystem type in the partition table is set to raid
autodetect (fd). Try changing it to something else, for example 83.
Note that these are hexadecimal numbers.
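For reference, a minimal sketch of how that type change might look with fdisk, assuming an MBR partition table and /dev/sda as a stand-in device name:

# list the partitions and their type codes (fd = Linux raid autodetect)
fdisk -l /dev/sda
# change partition 1 to type 83 (plain Linux) interactively:
#   fdisk /dev/sda
#   t    <- change a partition's type code
#   1    <- partition number
#   83   <- new type code (hex)
#   w    <- write the table and quit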
* Re: help!
2005-11-14 18:55 ` help! Carlos Carvalho
@ 2005-11-14 19:24 ` Shane Bishop
0 siblings, 0 replies; 7+ messages in thread
From: Shane Bishop @ 2005-11-14 19:24 UTC (permalink / raw)
To: linux-raid
Carlos Carvalho wrote:
>Shane Bishop (sbishop@trinitybiblecollege.edu) wrote on 14 November 2005 11:20:
> >I had an mdadm device running fine, and had created my own scripts for
> >shutting it down and such. I upgraded my distro, and all of a sudden it
> >decided to start initializing md devices on its own, which include one
> >that I want removed.
>
>Probably the filesystem type in the partition table is set to raid
>autodetect (fd). Try changing it to something else, for example 83.
>Note that these are hexadecimal numbers.
They were indeed set to raid autodetect. I was unaware of what that
actually did; I think I must have followed that step in a how-to when I
originally set it up. The odd thing is that these are the partition
pairs that were being assembled:
sda sdb
sda1 sdb1
sdc1 sdd1
The last two are the ones I actually wanted, but the first one was
something I had done when I was first playing around with it, if memory
serves me correctly. Is that possible, or does it point to some other issue?
Shane
* Re: help!
@ 2005-11-15 14:29 Andrew Burgess
0 siblings, 0 replies; 7+ messages in thread
From: Andrew Burgess @ 2005-11-15 14:29 UTC (permalink / raw)
To: linux-raid
>> >I had an mdadm device running fine, and had created my own scripts for
>> >shutting it down and such. I upgraded my distro, and all of a sudden it
>> >decided to start initializing md devices on its own, which include one
>> >that I want removed.
>>
>They were indeed set to raid autodetect. I was unaware of what that
>actually did, I think I must have followed that step in a how-to when I
>originally set it up. The odd thing is that these are the partition
>pairs that were being mounted:
>sda sdb
>sda1 sdb1
>sdc1 sdd1
>The last 2 are the ones I actually wanted, but the first one was
>something I had done when I was first playing around with it, if memory
>serves me correct. Is that possible, or does it point to some other issue?
If changing the partition type does not work, then you might try
zeroing the superblock with 'mdadm --zero-superblock /dev/sda', etc.
AFAIK the superblock and the mdadm.conf file are mdadm's only
information sources. Remove them both and it should be impossible
for the unwanted raid devices to start.
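A minimal sketch of that, assuming /dev/sda and /dev/sdb are the whole-disk
members you no longer want assembled (and keeping in mind the caveat below
about where the superblocks actually live):

# show whether an md superblock is present, and which version it is
mdadm --examine /dev/sda /dev/sdb
# stop anything still using the devices, then wipe the md metadata
mdadm --stop /dev/md1             # hypothetical name of the unwanted array
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb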
A funny thing here is that I believe the superblocks for sda and
sda1 are in exactly the same place, at the end of the disk, so I
can't see how it would find two different raid devices. What
superblock version are you using? I think v1 superblocks can be at
the beginning of the device, and I'm not sure whether the first
block of sda is the same as the first block of sda1...
HTH (and it could be all wrong)
* Help!
[not found] <1494909.124.1287759012019.JavaMail.root@asterisk>
@ 2010-10-22 14:51 ` Jesús Bermúdez
2010-10-26 19:45 ` Help! Janek Kozicki
2010-11-15 2:01 ` Help! Neil Brown
0 siblings, 2 replies; 7+ messages in thread
From: Jesús Bermúdez @ 2010-10-22 14:51 UTC (permalink / raw)
To: linux-raid
Hello all,
if you could help us, for we are completely desperate...
We have a raid5 with 3 disks that got out of sync due to a power failure. After trying to assemble it (with mdadm --assemble --force /dev/md0) it says:
md: md0 stopped.
md: bind<sdb2>
md: bind<sda2>
md: bind<sdc2>
md: md0: array is not clean -- starting background reconstruction
raid5: device sdc2 operational as raid disk 0
raid5: device sda2 operational as raid disk 2
raid5: device sdb2 operational as raid disk 1
raid5: allocated 32kB for md0
raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 0
RAID5 conf printout:
 --- rd:3 wd:3
 disk 0, o:1, dev:sdc2
 disk 1, o:1, dev:sdb2
 disk 2, o:1, dev:sda2
md0: bitmap file is out of date (892 < 893) -- forcing full recovery
md0: bitmap file is out of date, doing full recovery
md0: bitmap initialisation failed: -5
md0: failed to create bitmap (-5)
md: pers->run() failed ...
mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
Tried to stop the array and reassemble it with:
mdadm --assemble --force --scan
mdadm --assemble --force --scan /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2
mdadm --assemble --force --run /dev/md0 /dev/sda2 /dev/sdb2
mdadm --assemble --force --run /dev/md0 /dev/sda2 /dev/sdc2
mdadm --assemble --force --run /dev/md0 /dev/sdb2 /dev/sdc2
Tried to solve the bitmap problem with:
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal
mdadm --grow /dev/md0 --bitmap=none --force
mdadm --grow /dev/md0 --bitmap=internal --force
Tried to fake the 'clean' status of the array with:
echo "clean" > /sys/block/md0/md/array_state
Tried to boot the array from grub with:
md-mod.start_dirty_degraded=1
None of these commands has worked. Here are the details of the array and of each of the disks:
-----------------------------------------------------------------------------------------------------------------------------------
mdadm -D /dev/md0

/dev/md0:
        Version : 01.00.03
  Creation Time : Fri Aug 28 18:58:39 2009
     Raid Level : raid5
  Used Dev Size : 83891328 (80.01 GiB 85.90 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Tue Oct 19 15:09 2010
          State : active, Not Started
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-asymmetric
     Chunk Size : 128K
           Name : 0
           UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
         Events : 893

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8       18        1      active sync   /dev/sdb2
       3       8        2        2      active sync   /dev/sda2
-----------------------------------------------------------------------------------------------------------------------------------
mdadm --examine /dev/sd[abc]2

/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
           Name : 0
  Creation Time : Fri Aug 28 18:58:39 2009
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 167782712 (80.01 GiB 85.90 GB)
     Array Size : 335565312 (160.01 GiB 171.81 GB)
  Used Dev Size : 167782656 (80.01 GiB 85.90 GB)
   Super Offset : 167782840 sectors
          State : active
    Device UUID : bbc156a7:6f3af82d:94714923:e212967a
Internal Bitmap : -81 sectors from superblock
    Update Time : Tue Oct 19 15:49:08 2010
       Checksum : 54a35562 - correct
         Events : 893
         Layout : left-asymmetric
     Chunk Size : 128K
     Array Slot : 3 (0, 1, failed, 2)
    Array State : uuU 1 failed

/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
           Name : 0
  Creation Time : Fri Aug 28 18:58:39 2009
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 167782712 (80.01 GiB 85.90 GB)
     Array Size : 335565312 (160.01 GiB 171.81 GB)
  Used Dev Size : 167782656 (80.01 GiB 85.90 GB)
   Super Offset : 167782840 sectors
          State : active
    Device UUID : d067101e:19056fdd:6b6e58fc:92128788
Internal Bitmap : -81 sectors from superblock
    Update Time : Tue Oct 19 15:49:08 2010
       Checksum : 61d3c2bf
         Events : 893
         Layout : left-asymmetric
     Chunk Size : 128K
     Array Slot : 1 (0, 1, failed, 2)
    Array State : uUu 1 failed

/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
           Name : 0
  Creation Time : Fri Aug 28 18:58:39 2009
     Raid Level : raid5
   Raid Devices : 3
 Avail Dev Size : 167782712 (80.01 GiB 85.90 GB)
     Array Size : 335565312 (160.01 GiB 171.81 GB)
  Used Dev Size : 167782656 (80.01 GiB 85.90 GB)
   Super Offset : 167782840 sectors
          State : active
    Device UUID : 0a1c2c74:04b9187f:6ab6b5cb:894d8b38
Internal Bitmap : -81 sectors from superblock
    Update Time : Tue Oct 19 15:49:08 2010
       Checksum : d8faadc0 - correct
         Events : 893
         Layout : left-asymmetric
     Chunk Size : 128K
     Array Slot : 0 (0, 1, failed, 2)
    Array State : Uuu 1 failed
-----------------------------------------------------------------------------------------------------------------------------------
mdadm --examine-bitmap /dev/sd[abc]2

        Filename : /dev/sda2
           Magic : 6d746962
         Version : 4
            UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
          Events : 892
  Events Cleared : 892
           State : Out of date
       Chunksize : 256 KB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 83891328 (80.01 GiB 85.90 GB)
          Bitmap : 327701 bits (chunks), 325633 dirty (99.4%)

        Filename : /dev/sdb2
           Magic : 6d746962
         Version : 4
            UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
          Events : 892
  Events Cleared : 892
           State : Out of date
       Chunksize : 256 KB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 83891328 (80.01 GiB 85.90 GB)
          Bitmap : 327701 bits (chunks), 325633 dirty (99.4%)

        Filename : /dev/sdc2
           Magic : 6d746962
         Version : 4
            UUID : ae9bd4fe:994ce882:4fa035e6:8094fc1a
          Events : 892
  Events Cleared : 892
           State : Out of date
       Chunksize : 256 KB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 83891328 (80.01 GiB 85.90 GB)
          Bitmap : 327701 bits (chunks), 325633 dirty (99.4%)
-----------------------------------------------------------------------------------------------------------------------------------
cat /sys/block/md0/md/array_state
inactive
cat /sys/block/md0/md/degraded
cat: /sys/block/md0/md/degraded: No such file or directory
cat /sys/block/md0/md/dev-sda2/errors
0
cat /sys/block/md0/md/dev-sda2/state
in_sync
cat /sys/block/md0/md/dev-sdb2/errors
24
cat /sys/block/md0/md/dev-sdb2/state
in_sync
cat /sys/block/md0/md/dev-sdc2/errors
0
cat /sys/block/md0/md/dev-sdc2/state
in_sync
-----------------------------------------------------------------------------------------------------------------------------------
Thanks in advance.
--
Jesus Bermudez Riquelme
Iten, S.L.
* Re: Help!
2010-10-22 14:51 ` Help! Jesús Bermúdez
@ 2010-10-26 19:45 ` Janek Kozicki
2010-11-15 2:01 ` Help! Neil Brown
1 sibling, 0 replies; 7+ messages in thread
From: Janek Kozicki @ 2010-10-26 19:45 UTC (permalink / raw)
To: linux-raid
What kernel version and mdadm version?
Jesús Bermúdez said: (by the date of Fri, 22 Oct 2010 16:51:20 +0200 (CEST))
> Hello all,
>
> if you could help us, for we are completely desperate...
>
> We have a raid5 with 3 disks that got out of sync due to a power failure. After trying to assemble it (with mdadm --assemble --force /dev/md0) it says:
> [...]
--
Janek Kozicki http://janek.kozicki.pl/ |
* Re: Help!
2010-10-22 14:51 ` Help! Jesús Bermúdez
2010-10-26 19:45 ` Help! Janek Kozicki
@ 2010-11-15 2:01 ` Neil Brown
1 sibling, 0 replies; 7+ messages in thread
From: Neil Brown @ 2010-11-15 2:01 UTC (permalink / raw)
To: Jesús Bermúdez; +Cc: linux-raid
On Fri, 22 Oct 2010 16:51:20 +0200 (CEST)
Jesús Bermúdez <jbermudez@iten.es> wrote:
> Hello all,
>
> if you could help us, for we are completely desperate...
>
> We have a raid5 with 3 disks that got out of sync due to a power failure. After trying to assemble it (with mdadm --assemble --force /dev/md0) it says:
>
> md: md0 stopped.
> md: bind<sdb2>
> md: bind<sda2>
> md: bind<sdc2>
> md: md0: array is not clean -- starting background reconstruction
> raid5: device sdc2 operational as raid disk 0
> raid5: device sda2 operational as raid disk 2
> raid5: device sdb2 operational as raid disk 1
> raid5: allocated 32kB for md0
> raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 0
> RAID5 conf printout:
>  --- rd:3 wd:3
>  disk 0, o:1, dev:sdc2
>  disk 1, o:1, dev:sdb2
>  disk 2, o:1, dev:sda2
> md0: bitmap file is out of date (892 < 893) -- forcing full recovery
> md0: bitmap file is out of date, doing full recovery
> md0: bitmap initialisation failed: -5
This (the "-5") strongly suggests that we got an error when trying to write to
the bitmap. But such errors normally appear in the kernel logs, yet you
don't report any.
Is this still a problem for you or have you found a solution?
You probably need to assemble the array without the device which is suffering
the write errors.
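For example, a minimal sketch of that, assuming sdb2 is the member suffering
the write errors (it is the only one showing a non-zero error count in the
/sys output above):

# stop whatever is left half-assembled
mdadm --stop /dev/md0
# force a degraded assemble without the suspect member
mdadm --assemble --force --run /dev/md0 /dev/sda2 /dev/sdc2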
NeilBrown