* raid problem at reboot
@ 2011-06-21 13:14 william L'Heureux
2011-06-21 20:18 ` Phil Turmel
0 siblings, 1 reply; 11+ messages in thread
From: william L'Heureux @ 2011-06-21 13:14 UTC (permalink / raw)
To: linux-raid
Hi,
Before we start, please note that I'm not a pro at mdadm or LVM, and I don't know the full terminology nor exactly how it all works.
A week ago, one of my raid6 arrays went wrong: it didn't have any
superblock anymore. I know all the drives that belong to that raid, but I
don't know the right order of the drives. I tried a Python script that
assembles every possible permutation of the drives; it then checks the
header and runs e2fsck -n to check for errors. Every time I run the
script I get several different results. Some say that the order of the
drives doesn't matter; I agree, but without a superblock it's another
story. I have another raid6 which is fine, and on top of those two raid6
arrays there is LVM. If you have any question or output you want to see,
please tell me and I will provide further information as soon as I can.
If you don't understand me, I apologize; I'm trying to get better at English every day.
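(For illustration, the brute-force search described above amounts to roughly the following sketch. The device list, target array, and mdadm flags are assumptions taken from the /proc/mdstat output below, not the actual script; note that mdadm --create rewrites superblocks, which is only reasonable here because the originals are already lost.)
#!/usr/bin/env python
# Rough sketch of the brute-force order search described above -- NOT the
# poster's actual script. WARNING: --create writes new superblocks.
import itertools
import subprocess

DEVICES = ['/dev/sda1', '/dev/sdc1', '/dev/sdd1',
           '/dev/sdh1', '/dev/sdi1', '/dev/sdj1']

for order in itertools.permutations(DEVICES):
    subprocess.call(['mdadm', '--stop', '/dev/md0'])
    # --assume-clean skips the initial resync, so member data is untouched.
    rc = subprocess.call(['mdadm', '--create', '/dev/md0', '--run',
                          '--assume-clean', '--metadata=1.2', '--level=6',
                          '--raid-devices=6', '--chunk=128'] + list(order))
    if rc != 0:
        continue
    # Read-only filesystem check; with LVM on top, this would really be run
    # against the logical volume, e.g. /dev/mapper/vgRAID60-data0.
    if subprocess.call(['e2fsck', '-n', '/dev/md0']) == 0:
        print('candidate order: %s' % ', '.join(order))
        break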
Thanks
William
sudo cat /dev/.mdadm/map
md0 1.2 85c93553:f5707fc9:b777b1e6:a0427190 /dev/md0
md1 1.2 32d61f75:647daf4b:c1ee26ec:bc926504 /dev/md1
cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd1[5] sda1[4] sdi1[3] sdh1[2] sdj1[1] sdc1[0]
7809854976 blocks super 1.2 level 6, 128k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid6 sdk1[0] sdg1[7] sde1[6] sdf1[5] sdm1[8] sdb1[1]
7809855488 blocks super 1.2 level 6, 128k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 0/15 pages [0KB], 65536KB chunk
mount -r /mnt/data0
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vgRAID60-data0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR jeromepoulin@gmail.com
# definitions of existing MD arrays
#ARRAY /dev/md/0 metadata=1.2 UUID=d2349636:b4a0cb67:2af3246a:c7d72c9c name=BILLSSHACK:0
ARRAY /dev/md1 UUID=b52df561:301f043c:b5a7b1ab:9d934878
spares=1
#ARRAY /dev/md/3 metadata=1.2 UUID=751fd632:4baf7d64:ec26eec1:046592bc name=billsshack:3
#ARRAY /dev/md/2 metadata=1.2 UUID=cf2450e4:bc7a9cea:cb5ccda9:6f3ec87c name=billsshack:2
# This file was auto-generated on Wed, 15 Jun 2011 08:57:49 -0400
# by mkconf $Id$
* Re: raid problem at reboot
2011-06-21 13:14 raid problem at reboot william L'Heureux
@ 2011-06-21 20:18 ` Phil Turmel
2011-06-21 20:39 ` william L'Heureux
2011-06-21 20:50 ` william L'Heureux
0 siblings, 2 replies; 11+ messages in thread
From: Phil Turmel @ 2011-06-21 20:18 UTC (permalink / raw)
To: william L'Heureux; +Cc: linux-raid
Hi William,
On 06/21/2011 09:14 AM, william L'Heureux wrote:
> Hi,
>
> Before we start, please note that I'm not a pro at mdadm or LVM, and I don't know the full terminology nor exactly how it all works.
That's OK. We'll help as we can.
> A week ago, one of my raid6 arrays went wrong: it didn't have any
> superblock anymore. I know all the drives that belong to that raid, but I
> don't know the right order of the drives. I tried a Python script that
> assembles every possible permutation of the drives; it then checks the
> header and runs e2fsck -n to check for errors. Every time I run the
> script I get several different results. Some say that the order of the
> drives doesn't matter; I agree, but without a superblock it's another
> story. I have another raid6 which is fine, and on top of those two raid6
> arrays there is LVM. If you have any question or output you want to see,
> please tell me and I will provide further information as soon as I can.
> If you don't understand me, I apologize; I'm trying to get better at English every day.
First, an overview of what's working: Please show the output of "lsdrv".
http://github.com/pturmel/lsdrv
Then, some more detail: Please show "mdadm --detail /dev/md0 /dev/md1" and "mdadm --examine /dev/sd[abcdefghijkm]1"
Finally, please show "cat /etc/lvm/backup/*"
Phil
* RE: raid problem at reboot
2011-06-21 20:18 ` Phil Turmel
@ 2011-06-21 20:39 ` william L'Heureux
2011-06-21 20:50 ` william L'Heureux
1 sibling, 0 replies; 11+ messages in thread
From: william L'Heureux @ 2011-06-21 20:39 UTC (permalink / raw)
To: philip; +Cc: linux-raid
> Date: Tue, 21 Jun 2011 16:18:18 -0400
> From: philip@turmel.org
> To: wil_c_will@hotmail.com
> CC: linux-raid@vger.kernel.org
> Subject: Re: raid problem at reboot
>
> Hi William,
>
> On 06/21/2011 09:14 AM, william L'Heureux wrote:
> > Hi,
> >
> > Before we start, please note that I'm not a pro at mdadm or LVM, and I don't know the full terminology nor exactly how it all works.
>
> That's OK. We'll help as we can.
>
> > A week ago, one of my raid6 arrays went wrong: it didn't have any
> > superblock anymore. I know all the drives that belong to that raid, but I
> > don't know the right order of the drives. I tried a Python script that
> > assembles every possible permutation of the drives; it then checks the
> > header and runs e2fsck -n to check for errors. Every time I run the
> > script I get several different results. Some say that the order of the
> > drives doesn't matter; I agree, but without a superblock it's another
> > story. I have another raid6 which is fine, and on top of those two raid6
> > arrays there is LVM. If you have any question or output you want to see,
> > please tell me and I will provide further information as soon as I can.
> > If you don't understand me, I apologize; I'm trying to get better at English every day.
>
> First, an overview of what's working: Please show the output of "lsdrv".
>
> http://github.com/pturmel/lsdrv
>
> Then, some more detail: Please show "mdadm --detail /dev/md0 /dev/md1" and "mdadm --examine /dev/sd[abcdefghijkm]1"
>
> Finally, please show "cat /etc/lvm/backup/*"
>
> Phil
Sorry about lsdrv, but I'm getting an error:
./lsdrv
File "./lsdrv", line 26
def __init__(self, **entries):
^
IndentationError: expected an indented block
mdadm --detail /dev/md0 /dev/md1
/dev/md0:
Version : 1.2
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Array Size : 7809854976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 1952463744 (1862.01 GiB 1999.32 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Jun 21 11:23:46 2011
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Name : BILLSSHACK:0 (local to host BILLSSHACK)
UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Events : 0
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 113 3 active sync /dev/sdh1
4 8 129 4 active sync /dev/sdi1
5 8 145 5 active sync /dev/sdj1
/dev/md1:
Version : 1.2
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Array Size : 7809855488 (7448.06 GiB 7997.29 GB)
Used Dev Size : 1952463872 (1862.01 GiB 1999.32 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jun 14 20:58:11 2011
State : active
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Name : billsshack:3
UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Events : 235082
Number Major Minor RaidDevice State
0 8 161 0 active sync /dev/sdk1
1 8 17 1 active sync /dev/sdb1
8 8 193 2 active sync /dev/sdm1
5 8 81 3 active sync /dev/sdf1
6 8 65 4 active sync /dev/sde1
7 8 97 5 active sync /dev/sdg1
mdadm --examine /dev/sd[abcdefghijkm]1
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b6569293:759c2fbe:d52f346f:2ea85306
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 9d90c200 - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 0
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 0b6a5d24:bff1c401:259c054b:ca7ccc7f
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : bed09424 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 1
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 298ee23a:4968978c:0557d189:1c45ff55
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 7d918966 - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 1
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 2b8ef61c:d3d59200:659d4367:573a992d
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 88ad328d - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 1de7b73d:73ff1f80:c0903159:87fc8287
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 6c689348 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 4
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 3247b37b:aa741693:0d58ccb6:b814a5d1
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 65174812 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 3
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdg1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8483b8f7:8c3302c6:9403a3e9:3c3a094c
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : c1431453 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 5
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdh1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 1abe5069:570c484e:2199572d:cd0d641f
Update Time : Tue Jun 21 16:35:23 2011
Checksum : da9b6833 - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 3
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdi1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 083fa0cc:798c09bb:febe5adf:f99eb7b8
Update Time : Tue Jun 21 16:35:23 2011
Checksum : f603204f - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 4
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdj1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
Name : BILLSSHACK:0 (local to host BILLSSHACK)
Creation Time : Tue Jun 21 11:23:46 2011
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b91a18a4:8aba8cf8:bf046c8f:f683cf16
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 192754d0 - correct
Events : 2
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 5
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdk1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 77b91aac:68eed1cc:e2404d60:f5962abc
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 63409f22 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 0
Array State : AAAAAA ('A' == active, '.' == missing)
/dev/sdm1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
Name : billsshack:3
Creation Time : Tue Jun 15 09:59:08 2010
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 9b5a6d24:38202032:c40fc666:1f2570d7
Internal Bitmap : 2 sectors from superblock
Update Time : Tue Jun 21 16:35:23 2011
Checksum : 629fcf29 - correct
Events : 235082
Layout : left-symmetric
Chunk Size : 128K
Device Role : Active device 2
Array State : AAAAAA ('A' == active, '.' == missing)
cat /etc/lvm/backup/*
# Generated by LVM2 version 2.02.66(2) (2010-05-20): Sat May 21 09:40:02 2011
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgcfgbackup'"
creation_host = "BILLSSHACK" # Linux BILLSSHACK 2.6.35-28-generic-pae #49-Ubuntu SMP Tue Mar 1 14:58:06 UTC 2011 i686
creation_time = 1305985202 # Sat May 21 09:40:02 2011
vgDeb {
id = "a6nM6t-LPKY-gMdi-9YP4-t2tv-Hqjv-aMgmEa"
seqno = 10
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "0ecAby-vT4R-IHpt-5kEK-v1W0-u6AL-bUcFDG"
device = "/dev/sdl2" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 77883120 # 37.1376 Gigabytes
pe_start = 384
pe_count = 9507 # 37.1367 Gigabytes
}
}
logical_volumes {
swap {
id = "ywPLVh-84xx-UvPV-fUaI-PPgU-ZiTw-j6rs07"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 512 # 2 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
root {
id = "x0wMox-eWu2-EmDh-V0Ix-47Zl-iqVc-fTYjVB"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2560 # 10 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 512
]
}
}
rootu {
id = "IFNTXp-4DBN-jjtc-PrCE-Mxs5-68FH-vY5QWK"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 2560 # 10 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 5632
]
}
}
}
}
# Generated by LVM2 version 2.02.66(2) (2010-05-20): Sat May 21 09:40:02 2011
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgcfgbackup'"
creation_host = "BILLSSHACK" # Linux BILLSSHACK 2.6.35-28-generic-pae #49-Ubuntu SMP Tue Mar 1 14:58:06 UTC 2011 i686
creation_time = 1305985202 # Sat May 21 09:40:02 2011
vgRAID60 {
id = "RUodpI-jXin-DEjs-fS3R-4hjA-hQ4l-41dAaJ"
seqno = 58
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "V5c1yI-XD3D-FXlB-VDYd-tUfo-eIUC-hXQfVW"
device = "/dev/md0" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 15619710464 # 7.27349 Terabytes
pe_start = 512
pe_count = 1906702 # 7.27349 Terabytes
}
pv1 {
id = "XRg0sr-C21F-4v2f-9FzZ-JSQr-JZe2-Tbexqf"
device = "/dev/md1" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 15619710464 # 7.27349 Terabytes
pe_start = 512
pe_count = 1906702 # 7.27349 Terabytes
}
}
logical_volumes {
data0 {
id = "Odw7bi-10cB-8SSX-Y4BL-wqs6-4qWT-3AHaYL"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
segment_count = 1
segment1 {
start_extent = 0
extent_count = 3670016 # 14 Terabytes
type = "striped"
stripe_count = 2
stripe_size = 128 # 64 Kilobytes
stripes = [
"pv0", 0,
"pv1", 0
]
}
}
}
}
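(Editorial cross-check: the extent arithmetic in the vgRAID60 backup above is internally consistent. A small illustrative calculation, not part of the original message:)
# Sanity check of the vgRAID60 figures above (pure arithmetic, no LVM calls).
SECTOR = 512
extent = 8192 * SECTOR                    # extent_size = 8192 sectors = 4 MiB
pv_extents = 1906702                      # pe_count of each PV (md0 and md1)
lv_extents = 3670016                      # extent_count of data0

print(pv_extents * extent / 2.0 ** 40)    # ~7.27 TiB per PV, as commented
print(lv_extents * extent / 2.0 ** 40)    # 14.0 TiB for data0
# data0 is striped over both PVs, so each contributes half its extents:
print(lv_extents // 2, '<=', pv_extents)  # 1835008 of 1906702 extents used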
Again, I am not a RAID pro. Those commands I can understand; if I make a mistake anywhere about an mdadm concept, please tell me.
Nevertheless, thanks for your help.
William
* RE: raid problem at reboot
2011-06-21 20:18 ` Phil Turmel
2011-06-21 20:39 ` william L'Heureux
@ 2011-06-21 20:50 ` william L'Heureux
2011-06-21 21:58 ` Phil Turmel
1 sibling, 1 reply; 11+ messages in thread
From: william L'Heureux @ 2011-06-21 20:50 UTC (permalink / raw)
To: philip; +Cc: linux-raid
----------------------------------------
> Date: Tue, 21 Jun 2011 16:18:18 -0400
> From: philip@turmel.org
> To: wil_c_will@hotmail.com
> CC: linux-raid@vger.kernel.org
> Subject: Re: raid problem at reboot
>
> Hi William,
>
> On 06/21/2011 09:14 AM, william L'Heureux wrote:
> > Hi,
> >
> > Before we start, please note that I'm not a pro at mdadm or LVM, and I don't know the full terminology nor exactly how it all works.
>
> That's OK. We'll help as we can.
>
> > A week ago, one of my raid6 arrays went wrong: it didn't have any
> > superblock anymore. I know all the drives that belong to that raid, but I
> > don't know the right order of the drives. I tried a Python script that
> > assembles every possible permutation of the drives; it then checks the
> > header and runs e2fsck -n to check for errors. Every time I run the
> > script I get several different results. Some say that the order of the
> > drives doesn't matter; I agree, but without a superblock it's another
> > story. I have another raid6 which is fine, and on top of those two raid6
> > arrays there is LVM. If you have any question or output you want to see,
> > please tell me and I will provide further information as soon as I can.
> > If you don't understand me, I apologize; I'm trying to get better at English every day.
>
> First, an overview of what's working: Please show the output of "lsdrv".
>
> http://github.com/pturmel/lsdrv
>
> Then, some more detail: Please show "mdadm --detail /dev/md0 /dev/md1" and "mdadm --examine /dev/sd[abcdefghijkm]1"
>
> Finally, please show "cat /etc/lvm/backup/*"
>
> Phil
./lsdrv
PCI [sata_mv] 02:01.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
├─scsi 0:0:0:0 ATA Hitachi HDS72202 {JK1171YAGA977S}
│ └─sda: [8:0] Partitioned (gpt) 1.82t
│   ├─sda1: [8:1] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sda2: [8:2] Empty/Unknown 1.00g
├─scsi 1:0:0:0 ATA WL2000GSA1672 {WOL240094387}
│ └─sdb: [8:16] Partitioned (gpt) 1.82t
│   ├─sdb1: [8:17] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sdb2: [8:18] Empty/Unknown 1.00g
├─scsi 2:x:x:x [Empty]
├─scsi 3:x:x:x [Empty]
├─scsi 4:0:0:0 ATA WL2000GSA1672 {WOL240094356}
│ └─sdc: [8:32] Partitioned (gpt) 1.82t
│   ├─sdc1: [8:33] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sdc2: [8:34] Empty/Unknown 1.00g
├─scsi 5:0:0:0 ATA WL2000GSA3272C {WOL240146377}
│ └─sdd: [8:48] Partitioned (gpt) 1.82t
│   ├─sdd1: [8:49] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sdd2: [8:50] Empty/Unknown 1.00g
├─scsi 6:x:x:x [Empty]
└─scsi 7:x:x:x [Empty]
PCI [sata_mv] 03:02.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
├─scsi 8:0:0:0 ATA WL2000GSA3272 {WOL240102099}
│ └─sde: [8:64] Partitioned (gpt) 1.82t
│   ├─sde1: [8:65] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sde2: [8:66] Empty/Unknown 1.00g
├─scsi 9:x:x:x [Empty]
├─scsi 10:0:0:0 ATA Hitachi HDS72202 {JK1171YAGDDUGS}
│ └─sdf: [8:80] Partitioned (gpt) 1.82t
│   ├─sdf1: [8:81] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sdf2: [8:82] Empty/Unknown 1.00g
├─scsi 11:0:0:0 ATA ST32000542AS {6XW1B8WP}
│ └─sdg: [8:96] Partitioned (gpt) 1.82t
│   ├─sdg1: [8:97] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sdg2: [8:98] Empty/Unknown 1.00g
├─scsi 12:0:0:0 ATA WL2000GSA3272 {WOL240102098}
│ └─sdh: [8:112] Partitioned (gpt) 1.82t
│   ├─sdh1: [8:113] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sdh2: [8:114] Empty/Unknown 1.00g
├─scsi 13:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3323665}
│ └─sdi: [8:128] Partitioned (gpt) 1.82t
│   ├─sdi1: [8:129] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sdi2: [8:130] Empty/Unknown 1.00g
├─scsi 14:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3356471}
│ └─sdj: [8:144] Partitioned (gpt) 1.82t
│   ├─sdj1: [8:145] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
│   └─sdj2: [8:146] Empty/Unknown 1.00g
└─scsi 15:x:x:x [Empty]
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA AHCI Controller (rev 02)
├─scsi 16:x:x:x [Empty]
├─scsi 17:x:x:x [Empty]
├─scsi 18:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3286477}
│ └─sdk: [8:160] Partitioned (gpt) 1.82t
│   ├─sdk1: [8:161] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sdk2: [8:162] Empty/Unknown 1.00g
├─scsi 19:0:0:0 ATA WDC WD400BD-60LT {WD-WMAME1857443}
│ └─sdl: [8:176] Partitioned (dos) 37.27g
│   ├─sdl1: [8:177] Partitioned (dos) 133.32m {617e6b03-6cc1-4974-8acc-19a742cd06f4}
│   │ └─Mounted as /dev/sdl1 @ /boot
│   └─sdl2: [8:178] PV LVM2_member 22.00g/37.14g VG vgDeb 37.14g {0ecAby-vT4R-IHpt-5kEK-v1W0-u6AL-bUcFDG}
│     └─Volume Group vgDeb (sdl2) 15.14g free {a6nM6t-LPKY-gMdi-9YP4-t2tv-Hqjv-aMgmEa}
│       ├─dm-1: [251:1] LV 'root' (reiserfs) 10.00g {f497b9bd-e24f-4d20-9f6b-282b6493f341}
│       │ └─Mounted as /dev/mapper/vgDeb-root @ /mnt/debian
│       ├─dm-2: [251:2] LV 'rootu' (reiserfs) 10.00g {bc0f350c-4af7-4192-b1ea-4528101db28e}
│       │ └─Mounted as /dev/mapper/vgDeb-rootu @ /
│       └─dm-0: [251:0] LV 'swap' (swap) 2.00g {13264e60-a1d1-4bf3-af9f-edd948212a63}
├─scsi 20:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3271728}
│ └─sdm: [8:192] MD raid6 (4) 1.82t inactive {b52df561-301f-043c-b5a7-b1ab9d934878}
│   ├─sdm1: [8:193] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
│   └─sdm2: [8:194] MD raid6 (6) 1.00g inactive 'billsshack:2' {cf2450e4-bc7a-9cea-cb5c-cda96f3ec87c}
└─scsi 21:x:x:x [Empty]
Other Block Devices
├─ram0: [1:0] Empty/Unknown 64.00m
├─ram1: [1:1] Empty/Unknown 64.00m
├─ram2: [1:2] Empty/Unknown 64.00m
├─ram3: [1:3] Empty/Unknown 64.00m
├─ram4: [1:4] Empty/Unknown 64.00m
├─ram5: [1:5] Empty/Unknown 64.00m
├─ram6: [1:6] Empty/Unknown 64.00m
├─ram7: [1:7] Empty/Unknown 64.00m
├─ram8: [1:8] Empty/Unknown 64.00m
├─ram9: [1:9] Empty/Unknown 64.00m
├─ram10: [1:10] Empty/Unknown 64.00m
├─ram11: [1:11] Empty/Unknown 64.00m
├─ram12: [1:12] Empty/Unknown 64.00m
├─ram13: [1:13] Empty/Unknown 64.00m
├─ram14: [1:14] Empty/Unknown 64.00m
└─ram15: [1:15] Empty/Unknown 64.00m
Again, I am not a RAID pro. Those commands I can understand; if I make a mistake anywhere about an mdadm concept, please tell me.
Nevertheless, thanks for your help.
William
* Re: raid problem at reboot
2011-06-21 20:50 ` william L'Heureux
@ 2011-06-21 21:58 ` Phil Turmel
2011-06-22 1:00 ` william L'Heureux
0 siblings, 1 reply; 11+ messages in thread
From: Phil Turmel @ 2011-06-21 21:58 UTC (permalink / raw)
To: william L'Heureux; +Cc: linux-raid
Hi William,
On 06/21/2011 04:50 PM, william L'Heureux wrote:
>
> ./lsdrv
> PCI [sata_mv] 02:01.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
> ├─scsi 0:0:0:0 ATA Hitachi HDS72202 {JK1171YAGA977S}
> │ └─sda: [8:0] Partitioned (gpt) 1.82t
> │   ├─sda1: [8:1] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sda2: [8:2] Empty/Unknown 1.00g
> ├─scsi 1:0:0:0 ATA WL2000GSA1672 {WOL240094387}
> │ └─sdb: [8:16] Partitioned (gpt) 1.82t
> │   ├─sdb1: [8:17] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sdb2: [8:18] Empty/Unknown 1.00g
> ├─scsi 2:x:x:x [Empty]
> ├─scsi 3:x:x:x [Empty]
> ├─scsi 4:0:0:0 ATA WL2000GSA1672 {WOL240094356}
> │ └─sdc: [8:32] Partitioned (gpt) 1.82t
> │   ├─sdc1: [8:33] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sdc2: [8:34] Empty/Unknown 1.00g
> ├─scsi 5:0:0:0 ATA WL2000GSA3272C {WOL240146377}
> │ └─sdd: [8:48] Partitioned (gpt) 1.82t
> │   ├─sdd1: [8:49] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sdd2: [8:50] Empty/Unknown 1.00g
> ├─scsi 6:x:x:x [Empty]
> └─scsi 7:x:x:x [Empty]
> PCI [sata_mv] 03:02.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
> ├─scsi 8:0:0:0 ATA WL2000GSA3272 {WOL240102099}
> │ └─sde: [8:64] Partitioned (gpt) 1.82t
> │   ├─sde1: [8:65] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sde2: [8:66] Empty/Unknown 1.00g
> ├─scsi 9:x:x:x [Empty]
> ├─scsi 10:0:0:0 ATA Hitachi HDS72202 {JK1171YAGDDUGS}
> │ └─sdf: [8:80] Partitioned (gpt) 1.82t
> │   ├─sdf1: [8:81] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sdf2: [8:82] Empty/Unknown 1.00g
> ├─scsi 11:0:0:0 ATA ST32000542AS {6XW1B8WP}
> │ └─sdg: [8:96] Partitioned (gpt) 1.82t
> │   ├─sdg1: [8:97] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sdg2: [8:98] Empty/Unknown 1.00g
> ├─scsi 12:0:0:0 ATA WL2000GSA3272 {WOL240102098}
> │ └─sdh: [8:112] Partitioned (gpt) 1.82t
> │   ├─sdh1: [8:113] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sdh2: [8:114] Empty/Unknown 1.00g
> ├─scsi 13:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3323665}
> │ └─sdi: [8:128] Partitioned (gpt) 1.82t
> │   ├─sdi1: [8:129] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sdi2: [8:130] Empty/Unknown 1.00g
> ├─scsi 14:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3356471}
> │ └─sdj: [8:144] Partitioned (gpt) 1.82t
> │   ├─sdj1: [8:145] MD raid6 (6) 1.82t inactive 'BILLSSHACK:0' {3bcf45ef-da3e-de81-09a8-d6635b58832d}
> │   └─sdj2: [8:146] Empty/Unknown 1.00g
> └─scsi 15:x:x:x [Empty]
> PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA AHCI Controller (rev 02)
> ├─scsi 16:x:x:x [Empty]
> ├─scsi 17:x:x:x [Empty]
> ├─scsi 18:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3286477}
> │ └─sdk: [8:160] Partitioned (gpt) 1.82t
> │   ├─sdk1: [8:161] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sdk2: [8:162] Empty/Unknown 1.00g
> ├─scsi 19:0:0:0 ATA WDC WD400BD-60LT {WD-WMAME1857443}
> │ └─sdl: [8:176] Partitioned (dos) 37.27g
> │   ├─sdl1: [8:177] Partitioned (dos) 133.32m {617e6b03-6cc1-4974-8acc-19a742cd06f4}
> │   │ └─Mounted as /dev/sdl1 @ /boot
> │   └─sdl2: [8:178] PV LVM2_member 22.00g/37.14g VG vgDeb 37.14g {0ecAby-vT4R-IHpt-5kEK-v1W0-u6AL-bUcFDG}
> │     └─Volume Group vgDeb (sdl2) 15.14g free {a6nM6t-LPKY-gMdi-9YP4-t2tv-Hqjv-aMgmEa}
> │       ├─dm-1: [251:1] LV 'root' (reiserfs) 10.00g {f497b9bd-e24f-4d20-9f6b-282b6493f341}
> │       │ └─Mounted as /dev/mapper/vgDeb-root @ /mnt/debian
> │       ├─dm-2: [251:2] LV 'rootu' (reiserfs) 10.00g {bc0f350c-4af7-4192-b1ea-4528101db28e}
> │       │ └─Mounted as /dev/mapper/vgDeb-rootu @ /
> │       └─dm-0: [251:0] LV 'swap' (swap) 2.00g {13264e60-a1d1-4bf3-af9f-edd948212a63}
> ├─scsi 20:0:0:0 ATA WDC WD20EADS-00S {WD-WCAVY3271728}
> │ └─sdm: [8:192] MD raid6 (4) 1.82t inactive {b52df561-301f-043c-b5a7-b1ab9d934878}
> │   ├─sdm1: [8:193] MD raid6 (6) 1.82t inactive 'billsshack:3' {751fd632-4baf-7d64-ec26-eec1046592bc}
> │   └─sdm2: [8:194] MD raid6 (6) 1.00g inactive 'billsshack:2' {cf2450e4-bc7a-9cea-cb5c-cda96f3ec87c}
> └─scsi 21:x:x:x [Empty]
> Other Block Devices
> ├─ram0: [1:0] Empty/Unknown 64.00m
> ├─ram1: [1:1] Empty/Unknown 64.00m
> ├─ram2: [1:2] Empty/Unknown 64.00m
> ├─ram3: [1:3] Empty/Unknown 64.00m
> ├─ram4: [1:4] Empty/Unknown 64.00m
> ├─ram5: [1:5] Empty/Unknown 64.00m
> ├─ram6: [1:6] Empty/Unknown 64.00m
> ├─ram7: [1:7] Empty/Unknown 64.00m
> ├─ram8: [1:8] Empty/Unknown 64.00m
> ├─ram9: [1:9] Empty/Unknown 64.00m
> ├─ram10: [1:10] Empty/Unknown 64.00m
> ├─ram11: [1:11] Empty/Unknown 64.00m
> ├─ram12: [1:12] Empty/Unknown 64.00m
> ├─ram13: [1:13] Empty/Unknown 64.00m
> ├─ram14: [1:14] Empty/Unknown 64.00m
> └─ram15: [1:15] Empty/Unknown 64.00m
OK. We know the basic layout. (I think I'm going to have to give up the fancy line-drawing characters. Very few emails get them right...)
> mdadm --detail /dev/md0 /dev/md1
> /dev/md0:
> Version : 1.2
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Array Size : 7809854976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 1952463744 (1862.01 GiB 1999.32 GB)
> Raid Devices : 6
> Total Devices : 6
> Persistence : Superblock is persistent
>
> Update Time : Tue Jun 21 11:23:46 2011
> State : clean
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Events : 0
>
> Number Major Minor RaidDevice State
> 0 8 1 0 active sync /dev/sda1
> 1 8 33 1 active sync /dev/sdc1
> 2 8 49 2 active sync /dev/sdd1
> 3 8 113 3 active sync /dev/sdh1
> 4 8 129 4 active sync /dev/sdi1
> 5 8 145 5 active sync /dev/sdj1
> /dev/md1:
> Version : 1.2
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Array Size : 7809855488 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 1952463872 (1862.01 GiB 1999.32 GB)
> Raid Devices : 6
> Total Devices : 6
> Persistence : Superblock is persistent
>
> Intent Bitmap : Internal
>
> Update Time : Tue Jun 14 20:58:11 2011
> State : active
> Active Devices : 6
> Working Devices : 6
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Name : billsshack:3
> UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Events : 235082
>
> Number Major Minor RaidDevice State
> 0 8 161 0 active sync /dev/sdk1
> 1 8 17 1 active sync /dev/sdb1
> 8 8 193 2 active sync /dev/sdm1
> 5 8 81 3 active sync /dev/sdf1
> 6 8 65 4 active sync /dev/sde1
> 7 8 97 5 active sync /dev/sdg1
It is not immediately clear from this, or from your first email, which of your two raid6 arrays is messed up.
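(One way to narrow it down from the data below: group the --examine output by Array UUID and compare Events counters and creation times; a freshly re-created array starts its Events counter near zero. An illustrative helper, assuming mdadm's output format as shown in this thread:)
# Illustrative helper (not from the thread): print Array UUID, Events and
# Creation Time per member, to spot an array that was recently re-created.
import re
import subprocess

for dev in ['/dev/sd%s1' % c for c in 'abcdefghijkm']:
    out = subprocess.Popen(['mdadm', '--examine', dev],
                           stdout=subprocess.PIPE).communicate()[0]
    out = out.decode('utf-8', 'replace')
    grab = lambda key: re.search(key + r' : (.+)', out).group(1).strip()
    print('%s  uuid=%s  events=%s  created=%s'
          % (dev, grab('Array UUID'), grab('Events'), grab('Creation Time')))
(In the output quoted below, the 'BILLSSHACK:0' members report Events : 2 with a creation time of that same morning, while the 'billsshack:3' members report Events : 235082 dating from 2010.)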
> mdadm --examine /dev/sd[abcdefghijkm]1
> /dev/sda1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : b6569293:759c2fbe:d52f346f:2ea85306
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 9d90c200 - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 0
> Array State : AAAAAA ('A' == active, '.' == missing)
2048 sectors offset. This is important.
> /dev/sdb1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 0b6a5d24:bff1c401:259c054b:ca7ccc7f
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : bed09424 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 1
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdc1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 298ee23a:4968978c:0557d189:1c45ff55
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 7d918966 - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 1
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdd1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 2b8ef61c:d3d59200:659d4367:573a992d
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 88ad328d - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 2
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sde1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 1de7b73d:73ff1f80:c0903159:87fc8287
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 6c689348 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 4
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdf1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 3247b37b:aa741693:0d58ccb6:b814a5d1
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 65174812 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 3
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdg1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 8483b8f7:8c3302c6:9403a3e9:3c3a094c
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : c1431453 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 5
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdh1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 1abe5069:570c484e:2199572d:cd0d641f
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : da9b6833 - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 3
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdi1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 083fa0cc:798c09bb:febe5adf:f99eb7b8
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : f603204f - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 4
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdj1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 3bcf45ef:da3ede81:09a8d663:5b58832d
> Name : BILLSSHACK:0 (local to host BILLSSHACK)
> Creation Time : Tue Jun 21 11:23:46 2011
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619709952 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927488 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : b91a18a4:8aba8cf8:bf046c8f:f683cf16
>
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 192754d0 - correct
> Events : 2
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 5
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdk1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 77b91aac:68eed1cc:e2404d60:f5962abc
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 63409f22 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 0
> Array State : AAAAAA ('A' == active, '.' == missing)
> /dev/sdm1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 751fd632:4baf7d64:ec26eec1:046592bc
> Name : billsshack:3
> Creation Time : Tue Jun 15 09:59:08 2010
> Raid Level : raid6
> Raid Devices : 6
>
> Avail Dev Size : 3904927887 (1862.01 GiB 1999.32 GB)
> Array Size : 15619710976 (7448.06 GiB 7997.29 GB)
> Used Dev Size : 3904927744 (1862.01 GiB 1999.32 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 9b5a6d24:38202032:c40fc666:1f2570d7
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Tue Jun 21 16:35:23 2011
> Checksum : 629fcf29 - correct
> Events : 235082
>
> Layout : left-symmetric
> Chunk Size : 128K
>
> Device Role : Active device 2
> Array State : AAAAAA ('A' == active, '.' == missing)
They all have a 2048-sector data offset. This is good: it means your good array and your bad array were both created with a fairly recent version of mdadm.
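If you want to double-check that across all members, a loop like this sketch (untested) prints each member's offset:
for x in /dev/sd[abcdefghijkm]1 ; do echo -n "$x: " ; mdadm --examine $x | grep 'Data Offset' ; done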
> cat /etc/lvm/backup/*
> # Generated by LVM2 version 2.02.66(2) (2010-05-20): Sat May 21 09:40:02 2011
>
> contents = "Text Format Volume Group"
> version = 1
>
> description = "Created *after* executing 'vgcfgbackup'"
>
> creation_host = "BILLSSHACK" # Linux BILLSSHACK 2.6.35-28-generic-pae #49-Ubuntu SMP Tue Mar 1 14:58:06 UTC 2011 i686
> creation_time = 1305985202 # Sat May 21 09:40:02 2011
>
> vgDeb {
> id = "a6nM6t-LPKY-gMdi-9YP4-t2tv-Hqjv-aMgmEa"
> seqno = 10
> status = ["RESIZEABLE", "READ", "WRITE"]
> flags = []
> extent_size = 8192 # 4 Megabytes
> max_lv = 0
> max_pv = 0
>
> physical_volumes {
>
> pv0 {
> id = "0ecAby-vT4R-IHpt-5kEK-v1W0-u6AL-bUcFDG"
> device = "/dev/sdl2" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 77883120 # 37.1376 Gigabytes
> pe_start = 384
> pe_count = 9507 # 37.1367 Gigabytes
> }
> }
>
> logical_volumes {
>
> swap {
> id = "ywPLVh-84xx-UvPV-fUaI-PPgU-ZiTw-j6rs07"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 512 # 2 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 0
> ]
> }
> }
>
> root {
> id = "x0wMox-eWu2-EmDh-V0Ix-47Zl-iqVc-fTYjVB"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 2560 # 10 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 512
> ]
> }
> }
>
> rootu {
> id = "IFNTXp-4DBN-jjtc-PrCE-Mxs5-68FH-vY5QWK"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 2560 # 10 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 5632
> ]
> }
> }
> }
> }
We won't need the above. That's your system volume group.
> # Generated by LVM2 version 2.02.66(2) (2010-05-20): Sat May 21 09:40:02 2011
>
> contents = "Text Format Volume Group"
> version = 1
>
> description = "Created *after* executing 'vgcfgbackup'"
>
> creation_host = "BILLSSHACK" # Linux BILLSSHACK 2.6.35-28-generic-pae #49-Ubuntu SMP Tue Mar 1 14:58:06 UTC 2011 i686
> creation_time = 1305985202 # Sat May 21 09:40:02 2011
>
> vgRAID60 {
> id = "RUodpI-jXin-DEjs-fS3R-4hjA-hQ4l-41dAaJ"
> seqno = 58
> status = ["RESIZEABLE", "READ", "WRITE"]
> flags = []
> extent_size = 8192 # 4 Megabytes
> max_lv = 0
> max_pv = 0
>
> physical_volumes {
>
> pv0 {
> id = "V5c1yI-XD3D-FXlB-VDYd-tUfo-eIUC-hXQfVW"
> device = "/dev/md0" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 15619710464 # 7.27349 Terabytes
> pe_start = 512
> pe_count = 1906702 # 7.27349 Terabytes
> }
>
> pv1 {
> id = "XRg0sr-C21F-4v2f-9FzZ-JSQr-JZe2-Tbexqf"
> device = "/dev/md1" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 15619710464 # 7.27349 Terabytes
> pe_start = 512
> pe_count = 1906702 # 7.27349 Terabytes
> }
> }
We can go hunting for these IDs. They'll certainly show up on the first disk in each array. They might also show up on the 5th disk in each array, if by chance the other data chunks in the stripe are zeros. Please show:
for x in /dev/sd[abcdefghijkm] ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
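Where the 2056 comes from, so you can adapt it: the md data area starts 2048 sectors into each member, and LVM writes its text metadata a few sectors into the PV, just past the label and metadata-area header. So the interesting bytes should land near sector 2048 + 8 of the first data disk. You can sanity-check the idea on an intact array, where the PV starts at sector 0 of the md device itself; a sketch, using md1 as the example:
dd if=/dev/md1 skip=8 count=2 2>/dev/null | strings | head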
> logical_volumes {
>
> data0 {
> id = "Odw7bi-10cB-8SSX-Y4BL-wqs6-4qWT-3AHaYL"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 3670016 # 14 Terabytes
>
> type = "striped"
> stripe_count = 2
> stripe_size = 128 # 64 Kilobytes
>
> stripes = [
> "pv0", 0,
> "pv1", 0
> ]
> }
> }
> }
> }
Hmmm. Striped LVM on top of raid6. Trying to get better performance? With this striping, getting the data back requires 100% success. You won't get half of your data. You might get all of it, or you might get none of it.
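For context, and this is my guess at how it was originally made, not something taken from your configs: a 2-way striped LV with a 64K stripe, like data0, would come from something like
lvcreate -n data0 -L 14T -i 2 -I 64 vgRAID60
which interleaves every 64K of the filesystem alternately across /dev/md0 and /dev/md1. That's why a hole in either array punches holes through the whole filesystem.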
> Again, I am not a RAID pro. Those commands I can understand; if I make a mistake about an mdadm concept anywhere, please tell me.
>
> Nevertheless, thanks for your help.
You are welcome.
Phil
* RE: reaid problem at reboot
2011-06-21 21:58 ` Phil Turmel
@ 2011-06-22 1:00 ` william L'Heureux
2011-06-22 1:34 ` Phil Turmel
0 siblings, 1 reply; 11+ messages in thread
From: william L'Heureux @ 2011-06-22 1:00 UTC (permalink / raw)
To: philip; +Cc: linux-raid
for x in /dev/sd[abcdefghijkm] ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
**** /dev/sda ****
-BILLSSHACK:0
/4o.
**** /dev/sdb ****
billsshack:3
**** /dev/sdc ****
-BILLSSHACK:0
**** /dev/sdd ****
-BILLSSHACK:0
CgW:
**** /dev/sde ****
billsshack:3
**** /dev/sdf ****
billsshack:3
**** /dev/sdg ****
billsshack:3
<: L
**** /dev/sdh ****
-BILLSSHACK:0
**** /dev/sdi ****
-BILLSSHACK:0
**** /dev/sdj ****
-BILLSSHACK:0
**** /dev/sdk ****
billsshack:3
**** /dev/sdm ****
billsshack:3
Zm$8 2
Thanks
* Re: reaid problem at reboot
2011-06-22 1:00 ` william L'Heureux
@ 2011-06-22 1:34 ` Phil Turmel
2011-06-22 1:50 ` william L'Heureux
0 siblings, 1 reply; 11+ messages in thread
From: Phil Turmel @ 2011-06-22 1:34 UTC (permalink / raw)
To: william L'Heureux; +Cc: linux-raid
Hi William,
On 06/21/2011 09:00 PM, william L'Heureux wrote:
>
> for x in /dev/sd[abcdefghijkm] ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
> **** /dev/sda ****
> -BILLSSHACK:0
> /4o.
> **** /dev/sdb ****
> billsshack:3
> **** /dev/sdc ****
> -BILLSSHACK:0
> **** /dev/sdd ****
> -BILLSSHACK:0
> CgW:
> **** /dev/sde ****
> billsshack:3
> **** /dev/sdf ****
> billsshack:3
> **** /dev/sdg ****
> billsshack:3
> <: L
> **** /dev/sdh ****
> -BILLSSHACK:0
> **** /dev/sdi ****
> -BILLSSHACK:0
> **** /dev/sdj ****
> -BILLSSHACK:0
> **** /dev/sdk ****
> billsshack:3
> **** /dev/sdm ****
> billsshack:3
> Zm$8 2
Hmmm. Not what I expected. Ah! Missing "1": those read from the whole disks instead of the partitions. Try both of these instead:
for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2048 count=1 2>/dev/null |hexdump -C ; done
Phil
* RE: reaid problem at reboot
2011-06-22 1:34 ` Phil Turmel
@ 2011-06-22 1:50 ` william L'Heureux
2011-06-22 2:30 ` Phil Turmel
0 siblings, 1 reply; 11+ messages in thread
From: william L'Heureux @ 2011-06-22 1:50 UTC (permalink / raw)
To: philip; +Cc: linux-raid
for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
**** /dev/sda1 ****
**** /dev/sdb1 ****
LVM2 x[5A%r0N*>
vgRAID60 {
id = "RUodpI-jXin-DEjs-fS3R-4hjA-hQ4l-41dAaJ"
seqno = 19
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "V5c1yI-XD3D-FXlB-VDYd-tUfo-eIUC-hXQfVW"
device = "/dev/md1"
status = ["ALLOCATABLE"]
flags = []
dev_size = 7799999488
pe_start = 512
pe_count = 952148
pv1 {
id = "BzZcan-uC9b-MQne-caBo-tYp3-9xkA-NnGuBE"
device = "/dev/md2"
status = ["ALLOCATABLE"]
flags = []
dev_size = 7799999488
pe_start = 512
pe_count = 952148
**** /dev/sdc1 ****
**** /dev/sdd1 ****
!HV]3$xK
M1N+>
vgRQID6 {
*= "RUodpI-jXin-D<
s-fC3R-$hjA=hQ4l-41dAaJ"
seqnoV
ctates =0["RESIZEABLE", "RE7
", 2WRIDE"]
jags = []
extent_
ze - 81)2
mqx_lv = 0
max_pv = F
phisicql_v
6wmes {
pv0 {
&"V5s1yI=XD3T-FXlB-VDYd-tUfo-eI#
-hXAfVW2
defice = "/dev/md1"
atuc = K"AL\
@ATABLE"]
flags =
duv_syze -V2814057984
pe_sta
= %12
`e_c
zjt = 953864
pv6[{
it = 2BzZs
n-uC9b-MQne-caBo-
Np3-)xkA=NnGeUA"
device = "/dev
stqtus0E"["ALLOCATABLE"]
ags0= [M
pize = 7814057984Z
e_sdart0= 5!q
pe_count = 95386
**** /dev/sde1 ****
**** /dev/sdf1 ****
**** /dev/sdg1 ****
LV]2 xK
A%r0N*>
vgRQID6 {
/= "RUodpI-jXin-Dz
s-fC3R-$hjA=hQ4l-41dAaJ"
seqnoV
stadus - ["RESIZEABLE", "R3
D",0"WRYTE"M
flags = []
extent)
ize0= 8!92
y_lv = 0
max_pv =.
pxysisal_f
mumes {
pv0 {
"V%c1yY-XD#D-FXlB-VDYd-tUfo-e?
C-hHQfVG"
duvice = "/dev/md1"
tates =0["A\LOCATABLE"]
flags K
tev_cize0= 7799999488
pe_st
t =0512
pe_sount = 952148
yd =0"BzJcan-uC9b-MQne-caBo[
Yp3=9xkQ-NnW
device = "/de
md22
sdatuc = ["ALLOCATABLE"]|
lagc = K]
duv_size = 779999948N
pe_ctard = %12
pe_count = 9521B
**** /dev/sdh1 ****
**** /dev/sdi1 ****
"RiodpI-jXin-
dAaJ"
seqn
a82j
SIfEABLE", "R
extent
max_pv =
pv0 {
B-jDYd-tUfo-e
/dev/md1"
ABpE"]
flags
40 7984
pe_st
53864
C9^-MQne-caBo
deJice = "/de
q82B
ALpOCATABLE"]
781405798
7m!_
_cSunt = 9538
**** /dev/sdj1 ****
LVM2 x[5A%r0N*>
vgRAID60 {
id = "RUodpI-jXin-DEjs-fS3R-4hjA-hQ4l-41dAaJ"
seqno = 2
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 0
max_pv = 0
physical_volumes {
pv0 {
id = "V5c1yI-XD3D-FXlB-VDYd-tUfo-eIUC-hXQfVW"
device = "/dev/md1"
status = ["ALLOCATABLE"]
flags = []
dev_size = 7814057984
pe_start = 512
pe_count = 953864
pv1 {
id = "BzZcan-uC9b-MQne-caBo-tYp3-9xkA-NnGuBE"
device = "/dev/md2"
status = ["ALLOCATABLE"]
flags = []
dev_size = 7814057984
pe_start = 512
pe_count = 953864
**** /dev/sdk1 ****
"RiodpI-jXin-
dAaJ"
seqn
.[81Y
ESuZEABLE", "
= []
exten
max_pv
pv0 {
VDYd-tUfo-
"/dev/md1"
TA~LE"]
flags
99488
pe_s
952148
b-MQne-caB
6 pS
dYvice = "/d
"ApLOCATABLE"
_<$a
= 77999994
e__ount = 952
**** /dev/sdm1 ****
for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2048 count=1 2>/dev/null |hexdump -C ; done
**** /dev/sda1 ****
00000000 00 00 c0 0f 10 00 c0 0f 20 00 c0 0f e0 02 00 20 |........ ...... |
00000010 00 00 05 00 00 00 00 00 00 00 00 00 00 20 92 26 |............. .&|
00000020 01 00 c0 0f 11 00 c0 0f 20 02 c0 0f 00 00 00 20 |........ ...... |
00000030 00 00 05 00 00 00 00 00 00 00 00 00 00 20 df 10 |............. ..|
00000040 02 00 c0 0f 12 00 c0 0f 20 04 c0 0f 00 00 00 20 |........ ...... |
00000050 00 00 05 00 00 00 00 00 00 00 00 00 00 20 b1 b1 |............. ..|
00000060 03 00 c0 0f 13 00 c0 0f 20 06 c0 0f 00 00 00 20 |........ ...... |
00000070 00 00 05 00 00 00 00 00 00 00 00 00 00 20 6a 11 |............. j.|
00000080 04 00 c0 0f 14 00 c0 0f 20 08 c0 0f 00 00 00 20 |........ ...... |
00000090 00 00 05 00 00 00 00 00 00 00 00 00 00 20 6e b3 |............. n.|
000000a0 05 00 c0 0f 15 00 c0 0f 20 0a c0 0f 00 00 00 20 |........ ...... |
000000b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 b5 13 |............. ..|
000000c0 06 00 c0 0f 16 00 c0 0f 20 0c c0 0f 00 00 00 20 |........ ...... |
000000d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 db b2 |............. ..|
000000e0 07 00 c0 0f 17 00 c0 0f 20 0e c0 0f 00 00 00 20 |........ ...... |
000000f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 00 12 |............. ..|
00000100 08 00 c0 0f 18 00 c0 0f 20 10 c0 0f 00 00 00 20 |........ ...... |
00000110 00 00 05 00 00 00 00 00 00 00 00 00 00 20 d0 b6 |............. ..|
00000120 09 00 c0 0f 19 00 c0 0f 20 12 c0 0f 00 00 00 20 |........ ...... |
00000130 00 00 05 00 00 00 00 00 00 00 00 00 00 20 0b 16 |............. ..|
00000140 0a 00 c0 0f 1a 00 c0 0f 20 14 c0 0f 00 00 00 20 |........ ...... |
00000150 00 00 05 00 00 00 00 00 00 00 00 00 00 20 65 b7 |............. e.|
00000160 0b 00 c0 0f 1b 00 c0 0f 20 16 c0 0f 00 00 00 20 |........ ...... |
00000170 00 00 05 00 00 00 00 00 00 00 00 00 00 20 be 17 |............. ..|
00000180 0c 00 c0 0f 1c 00 c0 0f 20 18 c0 0f 00 00 00 20 |........ ...... |
00000190 00 00 05 00 00 00 00 00 00 00 00 00 00 20 ba b5 |............. ..|
000001a0 0d 00 c0 0f 1d 00 c0 0f 20 1a c0 0f 00 00 00 20 |........ ...... |
000001b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 61 15 |............. a.|
000001c0 0e 00 c0 0f 1e 00 c0 0f 20 1c c0 0f 00 00 00 20 |........ ...... |
000001d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 0f b4 |............. ..|
000001e0 0f 00 c0 0f 1f 00 c0 0f 20 1e c0 0f 00 00 00 20 |........ ...... |
000001f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 d4 14 |............. ..|
00000200
**** /dev/sdb1 ****
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
**** /dev/sdc1 ****
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
**** /dev/sdd1 ****
00000000 00 00 c0 0f 10 00 c0 0f 20 00 c0 0f e0 02 00 20 |........ ...... |
00000010 00 00 05 00 00 00 00 00 00 00 00 00 00 20 92 26 |............. .&|
00000020 01 00 c0 0f 11 00 c0 0f 20 02 c0 0f 00 00 00 20 |........ ...... |
00000030 00 00 05 00 00 00 00 00 00 00 00 00 00 20 df 10 |............. ..|
00000040 02 00 c0 0f 12 00 c0 0f 20 04 c0 0f 00 00 00 20 |........ ...... |
00000050 00 00 05 00 00 00 00 00 00 00 00 00 00 20 b1 b1 |............. ..|
00000060 03 00 c0 0f 13 00 c0 0f 20 06 c0 0f 00 00 00 20 |........ ...... |
00000070 00 00 05 00 00 00 00 00 00 00 00 00 00 20 6a 11 |............. j.|
00000080 04 00 c0 0f 14 00 c0 0f 20 08 c0 0f 00 00 00 20 |........ ...... |
00000090 00 00 05 00 00 00 00 00 00 00 00 00 00 20 6e b3 |............. n.|
000000a0 05 00 c0 0f 15 00 c0 0f 20 0a c0 0f 00 00 00 20 |........ ...... |
000000b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 b5 13 |............. ..|
000000c0 06 00 c0 0f 16 00 c0 0f 20 0c c0 0f 00 00 00 20 |........ ...... |
000000d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 db b2 |............. ..|
000000e0 07 00 c0 0f 17 00 c0 0f 20 0e c0 0f 00 00 00 20 |........ ...... |
000000f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 00 12 |............. ..|
00000100 08 00 c0 0f 18 00 c0 0f 20 10 c0 0f 00 00 00 20 |........ ...... |
00000110 00 00 05 00 00 00 00 00 00 00 00 00 00 20 d0 b6 |............. ..|
00000120 09 00 c0 0f 19 00 c0 0f 20 12 c0 0f 00 00 00 20 |........ ...... |
00000130 00 00 05 00 00 00 00 00 00 00 00 00 00 20 0b 16 |............. ..|
00000140 0a 00 c0 0f 1a 00 c0 0f 20 14 c0 0f 00 00 00 20 |........ ...... |
00000150 00 00 05 00 00 00 00 00 00 00 00 00 00 20 65 b7 |............. e.|
00000160 0b 00 c0 0f 1b 00 c0 0f 20 16 c0 0f 00 00 00 20 |........ ...... |
00000170 00 00 05 00 00 00 00 00 00 00 00 00 00 20 be 17 |............. ..|
00000180 0c 00 c0 0f 1c 00 c0 0f 20 18 c0 0f 00 00 00 20 |........ ...... |
00000190 00 00 05 00 00 00 00 00 00 00 00 00 00 20 ba b5 |............. ..|
000001a0 0d 00 c0 0f 1d 00 c0 0f 20 1a c0 0f 00 00 00 20 |........ ...... |
000001b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 61 15 |............. a.|
000001c0 0e 00 c0 0f 1e 00 c0 0f 20 1c c0 0f 00 00 00 20 |........ ...... |
000001d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 0f b4 |............. ..|
000001e0 0f 00 c0 0f 1f 00 c0 0f 20 1e c0 0f 00 00 00 20 |........ ...... |
000001f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 d4 14 |............. ..|
00000200
**** /dev/sde1 ****
00000000 00 00 c0 13 10 00 c0 13 20 00 c0 13 e0 04 00 20 |........ ...... |
00000010 00 00 05 00 00 00 00 00 00 00 00 00 00 20 e0 f7 |............. ..|
00000020 01 00 c0 13 11 00 c0 13 20 02 c0 13 00 00 00 20 |........ ...... |
00000030 00 00 05 00 00 00 00 00 00 00 00 00 00 20 4e 60 |............. N`|
00000040 02 00 c0 13 12 00 c0 13 20 04 c0 13 1f 04 00 20 |........ ...... |
00000050 00 00 05 00 00 00 00 00 00 00 00 00 00 20 55 d2 |............. U.|
00000060 03 00 c0 13 13 00 c0 13 20 06 c0 13 50 00 00 20 |........ ...P.. |
00000070 00 00 05 00 00 00 00 00 00 00 00 00 00 20 ea 70 |............. .p|
00000080 04 00 c0 13 14 00 c0 13 20 08 c0 13 00 00 00 20 |........ ...... |
00000090 00 00 05 00 00 00 00 00 00 00 00 00 00 20 ff c3 |............. ..|
000000a0 05 00 c0 13 15 00 c0 13 20 0a c0 13 00 00 00 20 |........ ...... |
000000b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 24 63 |............. $c|
000000c0 06 00 c0 13 16 00 c0 13 20 0c c0 13 00 00 00 20 |........ ...... |
000000d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 4a c2 |............. J.|
000000e0 07 00 c0 13 17 00 c0 13 20 0e c0 13 7f 00 00 20 |........ ...... |
000000f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 b8 ae |............. ..|
00000100 08 00 c0 13 18 00 c0 13 20 10 c0 13 00 00 00 20 |........ ...... |
00000110 00 00 05 00 00 00 00 00 00 00 00 00 00 20 41 c6 |............. A.|
00000120 09 00 c0 13 19 00 c0 13 20 12 c0 13 00 00 00 20 |........ ...... |
00000130 00 00 05 00 00 00 00 00 00 00 00 00 00 20 9a 66 |............. .f|
00000140 0a 00 c0 13 1a 00 c0 13 20 14 c0 13 00 00 00 20 |........ ...... |
00000150 00 00 05 00 00 00 00 00 00 00 00 00 00 20 f4 c7 |............. ..|
00000160 0b 00 c0 13 1b 00 c0 13 20 16 c0 13 00 00 00 20 |........ ...... |
00000170 00 00 05 00 00 00 00 00 00 00 00 00 00 20 2f 67 |............. /g|
00000180 0c 00 c0 13 1c 00 c0 13 20 18 c0 13 00 00 00 20 |........ ...... |
00000190 00 00 05 00 00 00 00 00 00 00 00 00 00 20 2b c5 |............. +.|
000001a0 0d 00 c0 13 1d 00 c0 13 20 1a c0 13 00 00 00 20 |........ ...... |
000001b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 f0 65 |............. .e|
000001c0 0e 00 c0 13 1e 00 c0 13 20 1c c0 13 00 00 00 20 |........ ...... |
000001d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 9e c4 |............. ..|
000001e0 0f 00 c0 13 1f 00 c0 13 20 1e c0 13 00 00 00 20 |........ ...... |
000001f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 45 64 |............. Ed|
00000200
**** /dev/sdf1 ****
00000000 00 00 c0 03 10 00 c0 03 20 00 c0 03 0c 08 00 20 |........ ...... |
00000010 00 00 05 00 00 00 00 00 00 00 00 00 00 20 a7 fc |............. ..|
00000020 01 00 c0 03 11 00 c0 03 20 02 c0 03 00 00 00 20 |........ ...... |
00000030 00 00 05 00 00 00 00 00 00 00 00 00 00 20 38 cd |............. 8.|
00000040 02 00 c0 03 12 00 c0 03 20 04 c0 03 00 00 00 20 |........ ...... |
00000050 00 00 05 00 00 00 00 00 00 00 00 00 00 20 56 6c |............. Vl|
00000060 03 00 c0 03 13 00 c0 03 20 06 c0 03 00 00 00 20 |........ ...... |
00000070 00 00 05 00 00 00 00 00 00 00 00 00 00 20 8d cc |............. ..|
00000080 04 00 c0 03 14 00 c0 03 20 08 c0 03 00 00 00 20 |........ ...... |
00000090 00 00 05 00 00 00 00 00 00 00 00 00 00 20 89 6e |............. .n|
000000a0 05 00 c0 03 15 00 c0 03 20 0a c0 03 00 00 00 20 |........ ...... |
000000b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 52 ce |............. R.|
000000c0 06 00 c0 03 16 00 c0 03 20 0c c0 03 00 00 00 20 |........ ...... |
000000d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 3c 6f |............. <o|
000000e0 07 00 c0 03 17 00 c0 03 20 0e c0 03 00 00 00 20 |........ ...... |
000000f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 e7 cf |............. ..|
00000100 08 00 c0 03 18 00 c0 03 20 10 c0 03 00 00 00 20 |........ ...... |
00000110 00 00 05 00 00 00 00 00 00 00 00 00 00 20 37 6b |............. 7k|
00000120 09 00 c0 03 19 00 c0 03 20 12 c0 03 00 00 00 20 |........ ...... |
00000130 00 00 05 00 00 00 00 00 00 00 00 00 00 20 ec cb |............. ..|
00000140 0a 00 c0 03 1a 00 c0 03 20 14 c0 03 00 00 00 20 |........ ...... |
00000150 00 00 05 00 00 00 00 00 00 00 00 00 00 20 82 6a |............. .j|
00000160 0b 00 c0 03 1b 00 c0 03 20 16 c0 03 00 00 00 20 |........ ...... |
00000170 00 00 05 00 00 00 00 00 00 00 00 00 00 20 59 ca |............. Y.|
00000180 0c 00 c0 03 1c 00 c0 03 20 18 c0 03 00 00 00 20 |........ ...... |
00000190 00 00 05 00 00 00 00 00 00 00 00 00 00 20 5d 68 |............. ]h|
000001a0 0d 00 c0 03 1d 00 c0 03 20 1a c0 03 00 00 00 20 |........ ...... |
000001b0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 86 c8 |............. ..|
000001c0 0e 00 c0 03 1e 00 c0 03 20 1c c0 03 00 00 00 20 |........ ...... |
000001d0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 e8 69 |............. .i|
000001e0 0f 00 c0 03 1f 00 c0 03 20 1e c0 03 00 00 00 20 |........ ...... |
000001f0 00 00 05 00 00 00 00 00 00 00 00 00 00 20 33 c9 |............. 3.|
00000200
**** /dev/sdg1 ****
00000000 00 00 00 10 00 00 00 10 00 00 00 10 ec 0c 00 00 |................|
00000010 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 0b |..............G.|
00000020 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000040 00 00 00 10 00 00 00 10 00 00 00 10 1f 04 00 00 |................|
00000050 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03 be |................|
00000060 00 00 00 10 00 00 00 10 00 00 00 10 50 00 00 00 |............P...|
00000070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 67 bc |..............g.|
00000080 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000090 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000000a0 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000000c0 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
000000d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000000e0 00 00 00 10 00 00 00 10 00 00 00 10 7f 00 00 00 |................|
000000f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 5f 61 |.............._a|
00000100 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000110 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000120 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000130 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000140 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000150 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000160 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000170 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000180 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
00000190 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000001a0 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
000001b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000001c0 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
000001e0 00 00 00 10 00 00 00 10 00 00 00 10 00 00 00 00 |................|
000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 76 ad |..............v.|
00000200
**** /dev/sdh1 ****
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
**** /dev/sdi1 ****
00000000 00 00 4e 78 80 00 4e 78 1d 00 4e 78 53 10 00 1d |..Nx..Nx..NxS...|
00000010 00 00 28 00 00 00 00 00 00 00 00 00 00 1d e4 2d |..(............-|
00000020 08 00 4e 78 88 00 4e 78 1d 10 4e 78 00 00 00 1d |..Nx..Nx..Nx....|
00000030 00 00 28 00 00 00 00 00 00 00 00 00 00 1d b6 80 |..(.............|
00000040 10 00 4e 78 90 00 4e 78 1d 20 4e 78 00 00 00 1d |..Nx..Nx. Nx....|
00000050 00 00 28 00 00 00 00 00 00 00 00 00 00 1d e1 e1 |..(.............|
00000060 18 00 4e 78 98 00 4e 78 1d 30 4e 78 00 00 00 1d |..Nx..Nx.0Nx....|
00000070 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 77 88 |..(...........w.|
00000080 20 00 4e 78 a0 00 4e 78 1d 40 4e 78 00 00 00 1d | .Nx..Nx.@Nx....|
00000090 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 57 f1 |..(...........W.|
000000a0 28 00 4e 78 a8 00 4e 78 1d 50 4e 78 00 00 00 1d |(.Nx..Nx.PNx....|
000000b0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d c1 98 |..(.............|
000000c0 30 00 4e 78 b0 00 4e 78 1d 60 4e 78 00 00 00 1d |0.Nx..Nx.`Nx....|
000000d0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 96 f9 |..(.............|
000000e0 38 00 4e 78 b8 00 4e 78 1d 70 4e 78 00 00 00 1d |8.Nx..Nx.pNx....|
000000f0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 00 90 |..(.............|
00000100 40 00 4e 78 c0 00 4e 78 1d 80 4e 78 00 00 00 1d |@.Nx..Nx..Nx....|
00000110 00 00 28 00 00 00 00 00 00 00 00 00 00 1d ce d9 |..(.............|
00000120 48 00 4e 78 c8 00 4e 78 1d 90 4e 78 00 00 00 1d |H.Nx..Nx..Nx....|
00000130 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 58 b0 |..(...........X.|
00000140 50 00 4e 78 d0 00 4e 78 1d a0 4e 78 00 00 00 1d |P.Nx..Nx..Nx....|
00000150 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 0f d1 |..(.............|
00000160 58 00 4e 78 d8 00 4e 78 1d b0 4e 78 00 00 00 1d |X.Nx..Nx..Nx....|
00000170 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 99 b8 |..(.............|
00000180 60 00 4e 78 e0 00 4e 78 1d c0 4e 78 00 00 00 1d |`.Nx..Nx..Nx....|
00000190 00 00 28 00 00 00 00 00 00 00 00 00 00 1d b9 c1 |..(.............|
000001a0 68 00 4e 78 e8 00 4e 78 1d d0 4e 78 00 00 00 1d |h.Nx..Nx..Nx....|
000001b0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 2f a8 |..(.........../.|
000001c0 70 00 4e 78 f0 00 4e 78 1d e0 4e 78 00 00 00 1d |p.Nx..Nx..Nx....|
000001d0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d 78 c9 |..(...........x.|
000001e0 78 00 4e 78 f8 00 4e 78 1d f0 4e 78 00 00 00 1d |x.Nx..Nx..Nx....|
000001f0 00 00 28 00 00 00 00 00 00 00 00 00 00 1d ee a0 |..(.............|
00000200
**** /dev/sdj1 ****
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
**** /dev/sdk1 ****
00000000 00 00 69 94 c0 00 69 94 9d 00 69 94 63 00 00 9d |..i...i...i.c...|
00000010 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d f5 3c |..<............<|
00000020 0c 00 69 94 cc 00 69 94 9d 18 69 94 00 00 00 9d |..i...i...i.....|
00000030 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d aa 34 |..<............4|
00000040 18 00 69 94 d8 00 69 94 9d 30 69 94 f8 20 00 9d |..i...i..0i.. ..|
00000050 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d d7 73 |..<............s|
00000060 14 00 69 94 d4 00 69 94 9d 28 69 94 ba 00 00 9d |..i...i..(i.....|
00000070 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 0d b0 |..<.............|
00000080 30 00 69 94 f0 00 69 94 9d 60 69 94 00 00 00 9d |0.i...i..`i.....|
00000090 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d b5 f3 |..<.............|
000000a0 3c 00 69 94 fc 00 69 94 9d 78 69 94 00 00 00 9d |<.i...i..xi.....|
000000b0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 68 20 |..<...........h |
000000c0 28 00 69 94 e8 00 69 94 9d 50 69 94 00 00 00 9d |(.i...i..Pi.....|
000000d0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 9a ff |..<.............|
000000e0 24 00 69 94 e4 00 69 94 9d 48 69 94 df 00 00 9d |$.i...i..Hi.....|
000000f0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 12 02 |..<.............|
00000100 60 00 69 94 a0 00 69 94 9d c0 69 94 00 00 00 9d |`.i...i...i.....|
00000110 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d ee cf |..<.............|
00000120 6c 00 69 94 ac 00 69 94 9d d8 69 94 00 00 00 9d |l.i...i...i.....|
00000130 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 33 1c |..<...........3.|
00000140 78 00 69 94 b8 00 69 94 9d f0 69 94 00 00 00 9d |x.i...i...i.....|
00000150 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d c1 c3 |..<.............|
00000160 74 00 69 94 b4 00 69 94 9d e8 69 94 00 00 00 9d |t.i...i...i.....|
00000170 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 1c 10 |..<.............|
00000180 50 00 69 94 90 00 69 94 9d a0 69 94 00 00 00 9d |P.i...i...i.....|
00000190 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 2c db |..<...........,.|
000001a0 5c 00 69 94 9c 00 69 94 9d b8 69 94 00 00 00 9d |\.i...i...i.....|
000001b0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d f1 08 |..<.............|
000001c0 48 00 69 94 88 00 69 94 9d 90 69 94 00 00 00 9d |H.i...i...i.....|
000001d0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d 03 d7 |..<.............|
000001e0 44 00 69 94 84 00 69 94 9d 88 69 94 00 00 00 9d |D.i...i...i.....|
000001f0 00 00 3c 00 00 00 00 00 00 00 00 00 00 9d de 04 |..<.............|
00000200
**** /dev/sdm1 ****
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200
By the way, /dev/md1 is 100% fine.
The drives for /dev/md0 are 'a', 'c', 'd', 'h', 'i', 'j'.
* Re: reaid problem at reboot
2011-06-22 1:50 ` william L'Heureux
@ 2011-06-22 2:30 ` Phil Turmel
2011-06-22 3:34 ` william L'Heureux
0 siblings, 1 reply; 11+ messages in thread
From: Phil Turmel @ 2011-06-22 2:30 UTC (permalink / raw)
To: william L'Heureux; +Cc: linux-raid
Hi William,
Progress!
On 06/21/2011 09:50 PM, william L'Heureux wrote:
> for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2056 count=2 2>/dev/null |strings ; done
> **** /dev/sda1 ****
> **** /dev/sdb1 ****
[...]
> **** /dev/sdc1 ****
> **** /dev/sdd1 ****
> !HV]3$xK
> M1N+>
> vgRQID6 {
> *= "RUodpI-jXin-D<
> s-fC3R-$hjA=hQ4l-41dAaJ"
> seqnoV
> ctates =0["RESIZEABLE", "RE7
> ", 2WRIDE"]
> jags = []
> extent_
> ze - 81)2
> mqx_lv = 0
> max_pv = F
> phisicql_v
> 6wmes {
> pv0 {
> &"V5s1yI=XD3T-FXlB-VDYd-tUfo-eI#
> -hXAfVW2
> defice = "/dev/md1"
> atuc = K"AL\
> @ATABLE"]
> flags =
> duv_syze -V2814057984
> pe_sta
> = %12
> `e_c
> zjt = 953864
> pv6[{
> it = 2BzZs
> n-uC9b-MQne-caBo-
> Np3-)xkA=NnGeUA"
> device = "/dev
> stqtus0E"["ALLOCATABLE"]
> ags0= [M
> pize = 7814057984Z
> e_sdart0= 5!q
> pe_count = 95386
> **** /dev/sde1 ****
> **** /dev/sdf1 ****
> **** /dev/sdg1 ****
[...]
> **** /dev/sdh1 ****
> **** /dev/sdi1 ****
> "RiodpI-jXin-
> dAaJ"
> seqn
> a82j
> SIfEABLE", "R
> extent
> max_pv =
> pv0 {
> B-jDYd-tUfo-e
> /dev/md1"
> ABpE"]
> flags
> 40 7984
> pe_st
> 53864
> C9^-MQne-caBo
> deJice = "/de
> q82B
> ALpOCATABLE"]
> 781405798
> 7m!_
> _cSunt = 9538
> **** /dev/sdj1 ****
> LVM2 x[5A%r0N*>
> vgRAID60 {
> id = "RUodpI-jXin-DEjs-fS3R-4hjA-hQ4l-41dAaJ"
> seqno = 2
> status = ["RESIZEABLE", "READ", "WRITE"]
> flags = []
> extent_size = 8192
> max_lv = 0
> max_pv = 0
> physical_volumes {
> pv0 {
> id = "V5c1yI-XD3D-FXlB-VDYd-tUfo-eIUC-hXQfVW"
> device = "/dev/md1"
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 7814057984
> pe_start = 512
> pe_count = 953864
> pv1 {
> id = "BzZcan-uC9b-MQne-caBo-tYp3-9xkA-NnGuBE"
> device = "/dev/md2"
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 7814057984
> pe_start = 512
> pe_count = 953864
> **** /dev/sdk1 ****
[...]
> **** /dev/sdm1 ****
>
>
> for x in /dev/sd[abcdefghijkm]1 ; do echo "**** $x ****" ; dd if=$x skip=2048 count=1 2>/dev/null |hexdump -C ; done
> **** /dev/sda1 ****
[...] (These weren't helpful, after all.)
> By the way, /dev/md1 is 100% fine.
Good. That simplifies the rest.
> The drives for /dev/md0 are 'a', 'c', 'd', 'h', 'i', 'j'.
Based on the above, 'j' is almost certainly the first disk in md0; I suspect 'i' is the 'P' parity drive and 'd' is the 'Q' parity drive.
Stop LVM and stop md0, then re-create (re-assembly won't fix this):
mdadm --create --assume-clean /dev/md0 --raid-devices=6 --level=6 --chunk=128 /dev/sd{j,a,c,h,i,d}1
(You must not use the [] glob syntax here, because the device order matters, and "--assume-clean" is vital.)
Restart LVM, then "fsck -n" to test.
If not yet good, shuffle 'a', 'c', & 'h'.
If still not good, swap 'i' & 'd', then try the combinations of 'a', 'c', and 'h' again.
Please let us know what combination, if any, works. Absolutely do *not* attempt to mount until "fsck -n" reports no problems, or not many problems.
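For clarity, one complete pass of the above would look something like this (a sketch; reorder the six devices on each attempt):
vgchange -an vgRAID60
mdadm --stop /dev/md0
mdadm --create --assume-clean /dev/md0 --raid-devices=6 --level=6 --chunk=128 /dev/sd{j,a,c,h,i,d}1
vgchange -ay vgRAID60
fsck -n /dev/mapper/vgRAID60-data0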
Phil
* RE: reaid problem at reboot
2011-06-22 2:30 ` Phil Turmel
@ 2011-06-22 3:34 ` william L'Heureux
2011-06-22 3:53 ` Phil Turmel
0 siblings, 1 reply; 11+ messages in thread
From: william L'Heureux @ 2011-06-22 3:34 UTC (permalink / raw)
To: philip; +Cc: linux-raid
/dev/sdj1 can't be first; it's /dev/sda1.
My friend wrote a Python script. Take a look.
cat rebuild.py
#!/usr/bin/env python
# Try drive orderings for md0: /dev/sdj1 /dev/sdc1 /dev/sda1 /dev/sdd1 /dev/sdh1 /dev/sdi1
import os
import time
from itertools import permutations

#aLetters = ('a', 'c', 'd', 'h', 'i', 'j')
aLetters = ('j', 'c', 'h')
j = []
for i in permutations(aLetters):
    # Skip permutations whose first three slots repeat the previous attempt.
    if i[0:3] == j[0:3]:
        continue
    j = i
    time.sleep(0.1)
    # Tear down any previous attempt before re-creating the array.
    os.system("vgchange -an vgRAID60 2>/dev/null >/dev/null")
    os.system("mdadm -S /dev/md127 2>/dev/null >/dev/null")
    os.system("mdadm -S /dev/md126 2>/dev/null >/dev/null")
    os.system("mdadm -S /dev/md0 2>/dev/null >/dev/null")
    (hStdin, hStdout) = os.popen4("mdadm --create /dev/md0 --level=6 --raid-devices=6 --assume-clean --chunk=128 /dev/sda1 /dev/sd%s1 /dev/sd%s1 /dev/sd%s1 missing missing" % i[0:3])
    hStdin.write("y\n")  # answer mdadm's confirmation prompt
    hStdin.close()
    hStdout.close()
    time.sleep(0.5)
    (hStdin, hStdout) = os.popen4("pvck /dev/md0")
    sOutput = hStdout.read()
    hStdin.close()
    hStdout.close()
    # The array is created with /dev/sda1 first, so report it that way.
    print("/dev/sda1 /dev/sd%s1 /dev/sd%s1 /dev/sd%s1 missing missing" % i[0:3])
    fRAID = open("/dev/md0", 'r')
    try:
        # The LVM text metadata should start around byte 4608 of the PV.
        fRAID.seek(4608)
        sLVM = fRAID.read(128)
        fRAID.close()
    except:
        fRAID.close()
        continue
    print(sLVM)
    if sOutput.find("Found label") != -1:
        # Drop into a shell for manual inspection when pvck sees an LVM label.
        os.system("/bin/bash")
        #os.system("vgchange -ay vgRAID60 2>/dev/null >/dev/null")
        #os.system("vgmknodes")
        #iExit = os.system("e2fsck -n /dev/vgRAID60/data0")
        #if iExit == 0:
        #    print("/dev/sd%s1 /dev/sd%s1 /dev/sd%s1 /dev/sd%s1 missing missing" % (i))
I can see the data with the combination below, but there are a lot of errors. It looks like a rebuild was started (at the beginning of the disks) and then cancelled.
We made a snapshot and ran e2fsck on it, and I am still (not right now, I have to sleep) trying with one or two drives marked missing. After the e2fsck I mount the snapshot, look at the data, and check for corruption, then rinse and repeat with another combination of missing and present drives. I rank the combinations by how little corruption they show.
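In case it helps, the cycle is roughly this (a sketch; the snapshot name and size are made up):
lvcreate -s -n data0snap -L 50G /dev/vgRAID60/data0
e2fsck -y /dev/vgRAID60/data0snap   # repairs hit only the snapshot, not the origin
mount -o ro /dev/vgRAID60/data0snap /mnt/check
# inspect the data, note the corruption, then tear down:
umount /mnt/check
lvremove -f /dev/vgRAID60/data0snap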
The only e2fsck that works is with: a, j, c, h, i, d.
* Re: reaid problem at reboot
2011-06-22 3:34 ` william L'Heureux
@ 2011-06-22 3:53 ` Phil Turmel
0 siblings, 0 replies; 11+ messages in thread
From: Phil Turmel @ 2011-06-22 3:53 UTC (permalink / raw)
To: william L'Heureux; +Cc: linux-raid
On 06/21/2011 11:34 PM, william L'Heureux wrote:
>
> /dev/sdj1 can't be first; it's /dev/sda1.
'j' had the LVM metadata in the right spot. It was first. If it's not now, then your partial rebuild scrambled it. I don't think I have any more expertise to offer at this point.
> My friend wrote a Python script. Take a look.
>
> cat rebuild.py
[...]
Nice to automate the search, especially with many combinations.
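For scale, and just as an illustration (nothing beyond the standard library):
from itertools import permutations
print(len(list(permutations('acdhij'))))   # 720 orderings for six members
print(len(list(permutations('cdhij'))))    # 120 once /dev/sda1 is pinned first
So pinning known positions pays off quickly.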
> I can see the data with the combination below, but there are a lot of errors. It looks like a rebuild was started (at the beginning of the disks) and then cancelled.
That hurts.
> We made a snapshot and ran e2fsck on it, and I am still (not right now, I have to sleep) trying with one or two drives marked missing. After the e2fsck I mount the snapshot, look at the data, and check for corruption, then rinse and repeat with another combination of missing and present drives. I rank the combinations by how little corruption they show.
>
> The only e2fsck that works is with: a, j, c, h, i, d.
Good luck.
Phil