* once again raid5
From: Ronny Plattner @ 2005-03-31 12:41 UTC
  To: linux-raid

Hi,

we still have trouble with our RAID5 array. You can find the detailed 
history of the fault in my earlier postings (11 March 2005).

Here is what I have tried so far.

There are four disks (Maxtor 250 GB) in a RAID5 array. One disk failed 
and we sent it back to Maxtor, so the array now consists of three disks.
I tried to reassemble it:

mdadm -A --run --force /dev/md2 /dev/hdi1 /dev/hdk1  /dev/hdo1

but got an error:

-snip-
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
-snap-

...in /var/log/messages

-snip-
Mar 31 14:29:00 server kernel: md: md2 stopped.
Mar 31 14:29:00 server kernel: md: bind<hdo1>
Mar 31 14:29:00 server kernel: md: bind<hdi1>
Mar 31 14:29:00 server kernel: md: bind<hdk1>
Mar 31 14:29:00 server kernel: raid5: device hdk1 operational as raid disk 1
Mar 31 14:29:00 server kernel: raid5: device hdo1 operational as raid disk 3
Mar 31 14:29:00 server kernel: RAID5 conf printout:
Mar 31 14:29:00 server kernel:  --- rd:4 wd:2 fd:2
Mar 31 14:29:00 server kernel:  disk 1, o:1, dev:hdk1
Mar 31 14:29:00 server kernel:  disk 3, o:1, dev:hdo1
-snap-

Amazingly, /proc/mdstat still shows the md2 array!

-snip-
md2 : inactive hdk1[1] hdi1[4] hdo1[3]
       735334848 blocks
-snap-
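
To see why the kernel refuses to run the array, it can help to compare 
the superblocks on the individual members; assuming the devices are 
still readable, something like this shows each member's event counter 
and role:

-snip-
# compare the md superblock on each member; a stale event counter or a
# "spare" role would explain why RUN_ARRAY fails
mdadm --examine /dev/hdi1
mdadm --examine /dev/hdk1
mdadm --examine /dev/hdo1
-snap-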

So md2 is there, but inactive. Next I tried mdadm --detail:
-snip-
mdadm  --detail /dev/md2
/dev/md2:
         Version : 00.90.01
   Creation Time : Mon Jun 14 18:43:20 2004
      Raid Level : raid5
     Device Size : 245111616 (233.76 GiB 250.99 GB)
    Raid Devices : 4
   Total Devices : 3
Preferred Minor : 2
     Persistence : Superblock is persistent

     Update Time : Sun Mar  6 16:40:29 2005
           State : active, degraded
  Active Devices : 2
Working Devices : 3
  Failed Devices : 0
   Spare Devices : 1

          Layout : left-symmetric
      Chunk Size : 64K

            UUID : 72c49e3a:de37c4f8:00a6d8a2:e8fddb2c
          Events : 0.60470022

     Number   Major   Minor   RaidDevice State
        0       0        0        -      removed
        1      57        1        1      active sync   /dev/hdk1
        2       0        0        -      removed
        3      89        1        3      active sync   /dev/hdo1

        4      56        1        -      spare   /dev/hdi1
-snap-

Okay, more details:

mdadm  --detail --test --brief /dev/md2

-snip-
ARRAY /dev/md2 level=raid5 num-devices=4 spares=1 
UUID=72c49e3a:de37c4f8:00a6d8a2:e8fddb2c
    devices=/dev/hdk1,/dev/hdo1,/dev/hdi1
-snap-

No more information :-(
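
Before trying anything else, the half-assembled, inactive array can be 
stopped so that the member devices are released again; a minimal step:

-snip-
# release the members from the inactive array
mdadm --stop /dev/md2
-snap-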


Can anyone give me some useful hints?

Regards
Ronny



* Re: once again raid5
From: Neil Brown @ 2005-03-31 12:56 UTC
  To: Ronny Plattner; +Cc: linux-raid

On Thursday March 31, ronny_p@gmx.net wrote:
> Hi,
> 
> we still have trouble with our RAID5 array. You can find the detailed 
> history of the fault in my earlier postings (11 March 2005).
> 
> Here is what I have tried so far.
> 
> There are four disks (Maxtor 250 GB) in a RAID5 array. One disk failed 
> and we sent it back to Maxtor, so the array now consists of three disks.
> I tried to reassemble it:
> 
> mdadm -A --run --force /dev/md2 /dev/hdi1 /dev/hdk1  /dev/hdo1
> 
> but got an error:
> 
> -snip-
> mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
> -snap-

It looks like hdi1 doesn't think it is an active part of the array. 
It is just a spare. 
It is as though the array was not fully synced when hdm1 (?) failed.

Looking back through previous emails, it looks like you had two drives
fail in a raid5 array.  This means you lose. :-(

Your best bet would be:

  mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1

and hope that the data you find on md2 isn't too corrupted.  You might be
lucky, but I'm not holding my breath - sorry.
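
Since --create with a "missing" slot only preserves the old data when 
the geometry matches the original array exactly, it is probably safest 
to pass the chunk size and layout from your --detail output explicitly; 
a sketch, assuming the device order above is right:

  # geometry must match the original array exactly, or the data is scrambled
  mdadm --create /dev/md2 --level=5 --raid-devices=4 \
        --chunk=64 --layout=left-symmetric \
        /dev/hdi1 /dev/hdk1 missing /dev/hdo1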

NeilBrown


* Re: once again raid5
From: Ronny Plattner @ 2005-03-31 13:15 UTC
  To: linux-raid

Hello,

Neil Brown wrote:
> It looks like hdi1 doesn't think it is an active part of the array. 
> It is just a spare. 
> It is as though the array was not fully synced when hdm1 (?) failed.

Mhm.

> Looking back through previous emails, it looks like you had two drives
> fail in a raid5 array.  This means you lose. :-(

hdm has been stable for two weeks now, with 3 reallocated sectors... so 
maybe not much data is lost.

> Your best bet would be:
> 
>   mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1
> 
> and hope that the data you find on md2 isn't too corrupted.  You might be

Okay. But isn't it better to use --build instead of --create? In the 
man page (printed 5 April 2004) I can see:

-snip-
mdadm --build device ..... --raid-devices=Z devices

This usage is similar to --create. The difference is that it creates a 
legacy array without a superblock. With these arrays there is no 
difference between initially creating the array and subsequently 
assembling the array, except that hopefully there is useful data there 
in the second case.
-snap-

> lucky, but I'm not holding my breath - sorry.

Thank you :-) ... so there are no problems with the superblocks?

Regards,
Ronny



* Re: once again raid5
From: Ronny Plattner @ 2005-04-03 16:56 UTC
  To: linux-raid

Hi !

Neil Brown wrote:
> Your best bet would be:
> 
>   mdadm --create /dev/md2 --level 5 -n 4 /dev/hdi1 /dev/hdk1 missing /dev/hdo1
> 
> and hope that the data you find on md2 isn't too corrupted.  You might be
> lucky, but I'm not holding my breath - sorry.

This worked, as far as I can see, but there is trouble with the 
filesystems (no superblocks found; ext3, xfs) :-(

-snip-
~# mdadm  --detail /dev/md2
/dev/md2:
         Version : 00.90.01
   Creation Time : Sun Apr  3 12:34:42 2005
      Raid Level : raid5
      Array Size : 735334848 (701.27 GiB 752.98 GB)
     Device Size : 245111616 (233.76 GiB 250.99 GB)
    Raid Devices : 4
   Total Devices : 3
Preferred Minor : 2
     Persistence : Superblock is persistent

     Update Time : Sun Apr  3 12:34:42 2005
           State : clean, degraded
  Active Devices : 3
Working Devices : 3
  Failed Devices : 0
   Spare Devices : 0

          Layout : left-symmetric
      Chunk Size : 64K

            UUID : 39e90883:c8f824d7:16732793:9ba70289
          Events : 0.60470023

     Number   Major   Minor   RaidDevice State
        0      56        1        0      active sync   /dev/hdi1
        1      57        1        1      active sync   /dev/hdk1
        2       0        0        -      removed
        3      89        1        3      active sync   /dev/hdo1
-snap-
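
When the replacement disk comes back, it can simply be added to the 
degraded array and md will rebuild parity onto it; a sketch (hdm1 is 
just a guess at the device name it will get):

-snip-
# hdm1 is a hypothetical name for the replacement disk
mdadm --add /dev/md2 /dev/hdm1
-snap-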


Thanks
Ronny

PS: For people who are interested:

-snip-
server:~# mount -t ext3 /dev/mapper/raid5--volume-home /mnt/data
mount: wrong fs type, bad option, bad superblock on 
/dev/mapper/raid5--volume-home,
        or too many mounted file systems
        (could this be the IDE device where you in fact use
        ide-scsi so that sr0 or sda or so is needed?)

server:~# mount -t xfs /dev/mapper/raid5--volume-data  /mnt/data
mount: /dev/mapper/raid5--volume-data: can't read superblock
-snap-
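
If only the primary filesystem superblocks are damaged, read-only 
checks against a backup superblock may still get further; a hedged 
sketch, with both commands in no-modify mode:

-snip-
# ext3: try a backup superblock, answering "no" to all fixes (-n);
# 8193 assumes 1K blocks, for 4K blocks try 32768 instead
e2fsck -n -b 8193 /dev/mapper/raid5--volume-home

# xfs: no-modify mode, only reports what it would repair
xfs_repair -n /dev/mapper/raid5--volume-data
-snap-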



