linux-raid.vger.kernel.org archive mirror
* strange problem with my raid5
@ 2011-03-30 12:34 hank peng
  2011-03-31 16:24 ` Simon McNair
  0 siblings, 1 reply; 5+ messages in thread
From: hank peng @ 2011-03-30 12:34 UTC (permalink / raw)
  To: linux-raid

Hi, all:
I created a RAID5 array consisting of 15 disks. Before recovery
finished, a power failure occurred. After power was restored, the
machine booted successfully, but "cat /proc/mdstat" showed nothing;
the previously created RAID5 array was gone. I checked the kernel
messages, which are as follows:

<snip>
bonding: bond0: enslaving eth1 as a backup interface with a down link.
svc: failed to register lockdv1 RPC service (errno 97).
rpc.nfsd used greatest stack depth: 5440 bytes left
md: md1 stopped.
iSCSI Enterprise Target Software - version 1.4.1
</snip>

In the normal case, md1 should bind its disks after printing "md: md1
stopped", so what happened in this situation?
BTW, my kernel version is 2.6.31.6.


-- 
The simplest is not all best but the best is surely the simplest!


* Re: strange problem with my raid5
  2011-03-30 12:34 strange problem with my raid5 hank peng
@ 2011-03-31 16:24 ` Simon McNair
  2011-04-01  0:19   ` hank peng
  0 siblings, 1 reply; 5+ messages in thread
From: Simon McNair @ 2011-03-31 16:24 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid

I think the normal thing to try in this situation is:

  mdadm --assemble --scan

and if that doesn't work, people normally ask for the output of:

  mdadm -E /dev/sd??

for each drive that should be in the array.
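
For example, a small shell loop like this would dump every superblock
in one go (just a sketch; adjust the device range to match the drives
that are actually in the box):

  for d in /dev/sd[a-p]; do
      echo "== $d =="   # label each drive's output
      mdadm -E "$d"     # print the md superblock, if any
  done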

Have a look at dmesg too.

I don't know much about md, I just lurk, so apologies if you already know 
this.

cheers
Simon

On 30/03/2011 13:34, hank peng wrote:
> Hi, all:
> I created a RAID5 array consisting of 15 disks. Before recovery
> finished, a power failure occurred. After power was restored, the
> machine booted successfully, but "cat /proc/mdstat" showed nothing;
> the previously created RAID5 array was gone. I checked the kernel
> messages, which are as follows:
>
> <snip>
> bonding: bond0: enslaving eth1 as a backup interface with a down link.
> svc: failed to register lockdv1 RPC service (errno 97).
> rpc.nfsd used greatest stack depth: 5440 bytes left
> md: md1 stopped.
> iSCSI Enterprise Target Software - version 1.4.1
> </snip>
>
> In the normal case, md1 should bind its disks after printing "md: md1
> stopped", so what happened in this situation?
> BTW, my kernel version is 2.6.31.6.
>
>


* Re: strange problem with my raid5
  2011-03-31 16:24 ` Simon McNair
@ 2011-04-01  0:19   ` hank peng
  2011-04-01  7:22     ` Simon McNair
       [not found]     ` <4D957D04.4040503@gmail.com>
  0 siblings, 2 replies; 5+ messages in thread
From: hank peng @ 2011-04-01  0:19 UTC (permalink / raw)
  To: simonmcnair; +Cc: linux-raid

Thanks for the reply; I have more information to add.
I created 3 RAID5 arrays, then created 6 iSCSI LUNs on them, two LUNs
per RAID5 array. I then exported them to the Windows side, where I
formatted them with the NTFS filesystem.
On the Linux side, the information is as follows:

#fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdi doesn't contain a valid partition table

Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdk doesn't contain a valid partition table

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdl: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdl1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sdm: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1               1      243199  1953495903+   7  HPFS/NTFS

Disk /dev/sdn: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdn doesn't contain a valid partition table

Disk /dev/sdo: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdo doesn't contain a valid partition table

Disk /dev/sdp: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
unused devices: <none>
root@Dahua_Storage:~# cat /etc/mdadm.conf
DEVICE /dev/sd*
ARRAY /dev/md3 level=raid5 num-devices=5
UUID=2d3ac8ef:2dbe2469:b31e3c87:77c5769c
   devices=/dev/sdg1,/dev/sdg,/dev/sdf1,/dev/sdf,/dev/sde,/dev/sdd,/dev/sdc
ARRAY /dev/md1 level=raid5 num-devices=5
UUID=9462a7df:31fca040:023819d9:dbf71832
   devices=/dev/sdm1,/dev/sdm,/dev/sdl1,/dev/sdl,/dev/sdk,/dev/sdj,/dev/sdi
ARRAY /dev/md2 level=raid5 num-devices=5
UUID=5dbc2bdc:9173d426:21a1b5c2:f8b2768a
   devices=/dev/sdp,/dev/sdo,/dev/sdn,/dev/sdb1,/dev/sdb,/dev/sda1,/dev/sda



There are two strange points:
1. As you can see, there are "sdg1", "sdf1", "sdm1", "sdl1", "sdb1" and
"sda1". These partitions should not exist.
2. The content of /etc/mdadm.conf is abnormal: "sdg1", "sdf1", "sdm1",
"sdl1", "sdb1" and "sda1" should not have been scanned and included
(see the sketch below).

2011/4/1 Simon McNair <simonmcnair@gmail.com>:
> I think the normal thing to try in this situation is:
>
>  mdadm --assemble --scan
>
> and if that doesn't work, people normally ask for the output of:
>
>  mdadm -E /dev/sd??
>
> for each drive that should be in the array.
>
> Have a look at dmesg too.
>
> I don't know much about md, I just lurk, so apologies if you already know
> this.
>
> cheers
> Simon
>
> On 30/03/2011 13:34, hank peng wrote:
>>
>> Hi, all:
>> I created a RAID5 array consisting of 15 disks. Before recovery
>> finished, a power failure occurred. After power was restored, the
>> machine booted successfully, but "cat /proc/mdstat" showed nothing;
>> the previously created RAID5 array was gone. I checked the kernel
>> messages, which are as follows:
>>
>> <snip>
>> bonding: bond0: enslaving eth1 as a backup interface with a down link.
>> svc: failed to register lockdv1 RPC service (errno 97).
>> rpc.nfsd used greatest stack depth: 5440 bytes left
>> md: md1 stopped.
>> iSCSI Enterprise Target Software - version 1.4.1
>> </snip>
>>
>> In the normal case, md1 should bind its disks after printing "md: md1
>> stopped", so what happened in this situation?
>> BTW, my kernel version is 2.6.31.6.
>>
>>
>



-- 
The simplest is not all best but the best is surely the simplest!
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: strange problem with my raid5
  2011-04-01  0:19   ` hank peng
@ 2011-04-01  7:22     ` Simon McNair
       [not found]     ` <4D957D04.4040503@gmail.com>
  1 sibling, 0 replies; 5+ messages in thread
From: Simon McNair @ 2011-04-01  7:22 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid

My guess is that you've exported the physical disks you were using in md 
as your iSCSI LUNs, rather than creating files on your formatted md 
device and exporting those files as LUNs.  The partitions on these disks 
were probably created by Windows.
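
In IET terms the difference would look something like this in
/etc/ietd.conf (a sketch, with made-up target names and paths):

  # exporting the raw member disk itself -- the initiator can then
  # partition and format it underneath md:
  Target iqn.2011-04.example:raw-sdb
          Lun 0 Path=/dev/sdb,Type=blockio

  # exporting a flat file that lives on a filesystem on the md device:
  Target iqn.2011-04.example:md1-lun0
          Lun 0 Path=/mnt/md1/lun0.img,Type=fileio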

Can you post your iSCSI config, the mdadm -E output that I asked for in 
the first place, and the dmesg info?

The partitions you've 'found' are all NTFS partitions, but I can't 
understand how they got into mdadm.conf.  As far as I am aware, 
mdadm.conf is always hand-crafted (apart from the original, which 
probably gets put there by apt).  Can anyone else on the list 
confirm/deny this?
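
(For what it's worth, when mdadm.conf isn't written entirely by hand,
the usual trick I've seen is appending scan output, e.g.:

  mdadm --detail --scan >> /etc/mdadm.conf

and even that wouldn't normally produce a devices= list like yours.)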

I'm guessing that this was a clean proof of concept and that there is 
no data loss.  Can you confirm?

cheers
Simon

On 01/04/2011 01:19, hank peng wrote:
> Thanks for the reply; I have more information to add.
> I created 3 RAID5 arrays, then created 6 iSCSI LUNs on them, two LUNs
> per RAID5 array. I then exported them to the Windows side, where I
> formatted them with the NTFS filesystem.
> On the Linux side, the information is as follows:
>
> #fdisk -l
> Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sda1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdc doesn't contain a valid partition table
>
> Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdd doesn't contain a valid partition table
>
> Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sdf1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sdg1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sde doesn't contain a valid partition table
>
> Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdj doesn't contain a valid partition table
>
> Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdi doesn't contain a valid partition table
>
> Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdk doesn't contain a valid partition table
>
> Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdh doesn't contain a valid partition table
>
> Disk /dev/sdl: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sdl1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sdm: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>     Device Boot      Start         End      Blocks   Id  System
> /dev/sdm1               1      243199  1953495903+   7  HPFS/NTFS
>
> Disk /dev/sdn: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdn doesn't contain a valid partition table
>
> Disk /dev/sdo: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdo doesn't contain a valid partition table
>
> Disk /dev/sdp: 1000.2 GB, 1000204886016 bytes
> 255 heads, 63 sectors/track, 121601 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> # cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
> unused devices: <none>
> root@Dahua_Storage:~# cat /etc/mdadm.conf
> DEVICE /dev/sd*
> ARRAY /dev/md3 level=raid5 num-devices=5
> UUID=2d3ac8ef:2dbe2469:b31e3c87:77c5769c
>     devices=/dev/sdg1,/dev/sdg,/dev/sdf1,/dev/sdf,/dev/sde,/dev/sdd,/dev/sdc
> ARRAY /dev/md1 level=raid5 num-devices=5
> UUID=9462a7df:31fca040:023819d9:dbf71832
>     devices=/dev/sdm1,/dev/sdm,/dev/sdl1,/dev/sdl,/dev/sdk,/dev/sdj,/dev/sdi
> ARRAY /dev/md2 level=raid5 num-devices=5
> UUID=5dbc2bdc:9173d426:21a1b5c2:f8b2768a
>     devices=/dev/sdp,/dev/sdo,/dev/sdn,/dev/sdb1,/dev/sdb,/dev/sda1,/dev/sda
>
>
>
> There are two strange points:
> 1. As you can see, there are "sdg1", "sdf1", "sdm1", "sdl1", "sdb1" and
> "sda1". These partitions should not exist.
> 2. The content of /etc/mdadm.conf is abnormal: "sdg1", "sdf1", "sdm1",
> "sdl1", "sdb1" and "sda1" should not have been scanned and included.
>
> 2011/4/1 Simon McNair <simonmcnair@gmail.com>:
>> I think the normal thing to try in this situation is:
>>
>>   mdadm --assemble --scan
>>
>> and if that doesn't work, people normally ask for the output of:
>>
>>   mdadm -E /dev/sd??
>>
>> for each drive that should be in the array.
>>
>> Have a look at dmesg too.
>>
>> I don't know much about md, I just lurk, so apologies if you already know
>> this.
>>
>> cheers
>> Simon
>>
>> On 30/03/2011 13:34, hank peng wrote:
>>> Hi, all:
>>> I created a RAID5 array consisting of 15 disks. Before recovery
>>> finished, a power failure occurred. After power was restored, the
>>> machine booted successfully, but "cat /proc/mdstat" showed nothing;
>>> the previously created RAID5 array was gone. I checked the kernel
>>> messages, which are as follows:
>>>
>>> <snip>
>>> bonding: bond0: enslaving eth1 as a backup interface with a down link.
>>> svc: failed to register lockdv1 RPC service (errno 97).
>>> rpc.nfsd used greatest stack depth: 5440 bytes left
>>> md: md1 stopped.
>>> iSCSI Enterprise Target Software - version 1.4.1
>>> </snip>
>>>
>>> In the normal case, md1 should bind its disks after printing "md: md1
>>> stopped", so what happened in this situation?
>>> BTW, my kernel version is 2.6.31.6.
>>>
>>>
>
>


* Re: strange problem with my raid5
       [not found]     ` <4D957D04.4040503@gmail.com>
@ 2011-04-01  7:26       ` Simon McNair
  0 siblings, 0 replies; 5+ messages in thread
From: Simon McNair @ 2011-04-01  7:26 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid

For reference, this is the guide I use for setting up iSCSI using flat 
files for the disks:
http://www.howtoforge.com/using-iscsi-on-debian-lenny-initiator-and-target
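
The core of that approach is just a big flat file sitting on a
filesystem on the md device, something like (a sketch; size and path
are made up):

  # create a 10 GB backing file on the mounted array
  dd if=/dev/zero of=/mnt/md1/lun0.img bs=1M count=10240

which is then exported with Type=fileio, so the initiator never touches
the member disks at all.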

Can you confirm, in essence, that your set-up is similar to this?

Simon

On 01/04/2011 08:21, Simon McNair wrote:
> My guess is that you've exported the physical disks you were using in 
> md as your iSCSI LUNs, rather than creating files on your formatted 
> md device and exporting those files as LUNs.
>
> Can you post your iSCSI config, the mdadm -E output that I asked for in 
> the first place, and the dmesg info?
>
> The partitions you've 'found' are all NTFS partitions, but I can't 
> understand how they got into mdadm.conf.  As far as I am aware, 
> mdadm.conf is always hand-crafted (apart from the original, which 
> probably gets put there by apt).
>
> I'm guessing that this was a clean proof of concept and that there is 
> no data loss.  Can you confirm?
>
> cheers
> Simon
>
> On 01/04/2011 01:19, hank peng wrote:
>> Thanks for the reply; I have more information to add.
>> I created 3 RAID5 arrays, then created 6 iSCSI LUNs on them, two LUNs
>> per RAID5 array. I then exported them to the Windows side, where I
>> formatted them with the NTFS filesystem.
>> On the Linux side, the information is as follows:
>>
>> #fdisk -l
>> Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sda1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sdb1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdc doesn't contain a valid partition table
>>
>> Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdd doesn't contain a valid partition table
>>
>> Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sdf1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sdg1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sde doesn't contain a valid partition table
>>
>> Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdj doesn't contain a valid partition table
>>
>> Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdi doesn't contain a valid partition table
>>
>> Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdk doesn't contain a valid partition table
>>
>> Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdh doesn't contain a valid partition table
>>
>> Disk /dev/sdl: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sdl1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sdm: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sdm1               1      243199  1953495903+   7  HPFS/NTFS
>>
>> Disk /dev/sdn: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdn doesn't contain a valid partition table
>>
>> Disk /dev/sdo: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> Disk /dev/sdo doesn't contain a valid partition table
>>
>> Disk /dev/sdp: 1000.2 GB, 1000204886016 bytes
>> 255 heads, 63 sectors/track, 121601 cylinders
>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>
>> # cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
>> unused devices: <none>
>> root@Dahua_Storage:~# cat /etc/mdadm.conf
>> DEVICE /dev/sd*
>> ARRAY /dev/md3 level=raid5 num-devices=5
>> UUID=2d3ac8ef:2dbe2469:b31e3c87:77c5769c
>>     devices=/dev/sdg1,/dev/sdg,/dev/sdf1,/dev/sdf,/dev/sde,/dev/sdd,/dev/sdc
>> ARRAY /dev/md1 level=raid5 num-devices=5
>> UUID=9462a7df:31fca040:023819d9:dbf71832
>>     devices=/dev/sdm1,/dev/sdm,/dev/sdl1,/dev/sdl,/dev/sdk,/dev/sdj,/dev/sdi
>> ARRAY /dev/md2 level=raid5 num-devices=5
>> UUID=5dbc2bdc:9173d426:21a1b5c2:f8b2768a
>>     devices=/dev/sdp,/dev/sdo,/dev/sdn,/dev/sdb1,/dev/sdb,/dev/sda1,/dev/sda
>>
>>
>>
>> There are two strange points:
>> 1. As you can see, there are "sdg1", "sdf1", "sdm1", "sdl1", "sdb1" and
>> "sda1". These partitions should not exist.
>> 2. The content of /etc/mdadm.conf is abnormal: "sdg1", "sdf1", "sdm1",
>> "sdl1", "sdb1" and "sda1" should not have been scanned and included.
>>
>> 2011/4/1 Simon McNair <simonmcnair@gmail.com>:
>>> I think the normal thing to try in this situation is:
>>>
>>>   mdadm --assemble --scan
>>>
>>> and if that doesn't work, people normally ask for the output of:
>>>
>>>   mdadm -E /dev/sd??
>>>
>>> for each drive that should be in the array.
>>>
>>> Have a look at dmesg too.
>>>
>>> I don't know much about md, I just lurk, so apologies if you already know
>>> this.
>>>
>>> cheers
>>> Simon
>>>
>>> On 30/03/2011 13:34, hank peng wrote:
>>>> Hi, all:
>>>> I created a RAID5 array consisting of 15 disks. Before recovery
>>>> finished, a power failure occurred. After power was restored, the
>>>> machine booted successfully, but "cat /proc/mdstat" showed nothing;
>>>> the previously created RAID5 array was gone. I checked the kernel
>>>> messages, which are as follows:
>>>>
>>>> <snip>
>>>> bonding: bond0: enslaving eth1 as a backup interface with a down link.
>>>> svc: failed to register lockdv1 RPC service (errno 97).
>>>> rpc.nfsd used greatest stack depth: 5440 bytes left
>>>> md: md1 stopped.
>>>> iSCSI Enterprise Target Software - version 1.4.1
>>>> </snip>
>>>>
>>>> In the normal case, md1 should bind its disks after printing "md: md1
>>>> stopped", so what happened in this situation?
>>>> BTW, my kernel version is 2.6.31.6.
>>>>
>>>>
>>


end of thread

Thread overview: 5+ messages
2011-03-30 12:34 strange problem with my raid5 hank peng
2011-03-31 16:24 ` Simon McNair
2011-04-01  0:19   ` hank peng
2011-04-01  7:22     ` Simon McNair
     [not found]     ` <4D957D04.4040503@gmail.com>
2011-04-01  7:26       ` Simon McNair
