linux-raid.vger.kernel.org archive mirror
* Recovering RAID5 array
@ 2004-01-20  6:54 Jean Jordaan
  2004-01-20  6:59 ` Neil Brown
  2004-01-20  7:22 ` Guy
  0 siblings, 2 replies; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20  6:54 UTC (permalink / raw)
  To: linux-raid

Hi all

I'm having a RAID week. It looks like 1 disk out of a
3-disk RAID5 array has failed. The array consists of
/dev/hda3, /dev/hdb3 and /dev/hdc3 (all 40 GB).
I'm not sure which one is physically faulty. In an attempt
to find out, I did:
   mdadm --manage --set-faulty /dev/md0 /dev/hda3

The consequence of this was 2 disks marked faulty and no
way to get the array up again in order to use raidhotadd
to put that device back.

I'm scared of recreating superblocks and losing all my data.
So now I'm dd-copying all three RAID partitions (along the
lines of 'dd if=/dev/hdb3 of=/dev/hdc2') so that I can work
on a *copy* of the data.
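
Roughly like this for each of them ("hdX?" below is just a
placeholder for wherever the spare space is; conv=noerror,sync
keeps dd going past any read errors):

   dd if=/dev/hda3 of=/dev/hdX1 bs=64k conv=noerror,sync
   dd if=/dev/hdb3 of=/dev/hdX2 bs=64k conv=noerror,sync
   dd if=/dev/hdc3 of=/dev/hdX3 bs=64k conv=noerror,sync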

Then I aim to
mdadm --create /dev/md0 --raid-devices=3 --level=5 \
   --spare-devices=1 --chunk=64 --size=37111 \
   /dev/hda1 /dev/hda2 missing /dev/hdb1 /dev/hdb2

hda2 is a copy of the partition from the drive I currently
suspect of failing. hdb2 is a blank partition.

I've been running Seagate's drive diagnostic software
overnight, and the old disks check out clean. This makes me
afraid that it's reiserfs corruption, not a RAID disk
failure :/

Does anyone here have any comments on what I've done so far,
or if there's anything better I can do next?

-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* Re: Recovering RAID5 array
  2004-01-20  6:54 Recovering RAID5 array Jean Jordaan
@ 2004-01-20  6:59 ` Neil Brown
  2004-01-20  7:17   ` Jean Jordaan
  2004-01-20  8:08   ` Jean Jordaan
  2004-01-20  7:22 ` Guy
  1 sibling, 2 replies; 12+ messages in thread
From: Neil Brown @ 2004-01-20  6:59 UTC (permalink / raw)
  To: Jean Jordaan; +Cc: linux-raid

On Tuesday January 20, jean@upfrontsystems.co.za wrote:
> Hi all
> 
> I'm having a RAID week. It looks like 1 disk out of a
> 3-disk RAID5 array has failed. The array consists of
> /dev/hda3 /dev/hdb3 /dev/hdc3 (all 40Gb)
> I'm not sure which one is physically faulty. In an attempt
> to find out, I did:
>    mdadm --manage --set-faulty /dev/md0 /dev/hda3

   mdadm --detail /dev/md0
didn't tell you??

> 
> Does anyone here have any comments on what I've done so far,
> or if there's anything better I can do next?

mdadm --assemble /dev/md0 --force /dev/hd[abc]3

should put it back together for you.

NeilBrown


* Re: Recovering RAID5 array
  2004-01-20  6:59 ` Neil Brown
@ 2004-01-20  7:17   ` Jean Jordaan
  2004-01-20  8:08   ` Jean Jordaan
  1 sibling, 0 replies; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20  7:17 UTC (permalink / raw)
  To: linux-raid

Hi Neil

Thanks for the answer!

>    mdadm --detail /dev/md0
> didn't tell you??

It probably tried, but I was too rattled to parse all the
lun/target/part stuff at that point .. :(

> mdadm --assemble /dev/md0 --force /dev/hd[abc]3

OK, I'll try it on the copy and, if that doesn't work, on
the original data, since it isn't really an option to send
the originals away for recovery ..

-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* RE: Recovering RAID5 array
  2004-01-20  6:54 Recovering RAID5 array Jean Jordaan
  2004-01-20  6:59 ` Neil Brown
@ 2004-01-20  7:22 ` Guy
  1 sibling, 0 replies; 12+ messages in thread
From: Guy @ 2004-01-20  7:22 UTC (permalink / raw)
  To: 'Jean Jordaan', linux-raid

Warning!  Don't create, you could lose data!  Use --assemble --force!!!!!!

In the future, if you need to determine which disk is which, just dd each
disk to /dev/null and note which disk has its access light on solid!  After
you have done this to all of the good disks, the one that is left must be
the bad disk.  Or trace the cables and decode the jumpers!
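
For example, one disk at a time, watching the drive lights:

   dd if=/dev/hda of=/dev/null bs=64k
   dd if=/dev/hdb of=/dev/null bs=64k
   dd if=/dev/hdc of=/dev/null bs=64k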

Using dd to test a disk seems like a good test to me.  I have been using
dd for years to verify that a disk works.  I am sure it is not a 100% test,
but it will find a read error!  Just dd a disk to /dev/null: any errors, bad
disk.  After the disk has been removed from your array you can determine
whether the bad block(s) can be relocated by the drive.  To do this, dd
another disk onto the bad disk.  If that succeeds, do another read test of
the "bad" disk.  If that also succeeds, the bad block(s) have been relocated.
I wish the OS or md could do something like this before the disk is dropped
from the array; it would save a lot of problems.  In that case the bad
block(s) would be over-written with reconstructed data using the redundancy
logic.
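
As a rough sketch, with hdX standing in for the suspect disk and hdY for a
known-good one (the second command overwrites hdX completely, so only do it
once the disk is out of the array):

   dd if=/dev/hdX of=/dev/null bs=64k   # read test: any error means bad blocks
   dd if=/dev/hdY of=/dev/hdX bs=64k    # write over the whole suspect disk
   dd if=/dev/hdX of=/dev/null bs=64k   # re-read: clean now means they relocated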

Also, I don't think a file system could cause a bad disk.

Guy


* Re: Recovering RAID5 array
  2004-01-20  6:59 ` Neil Brown
  2004-01-20  7:17   ` Jean Jordaan
@ 2004-01-20  8:08   ` Jean Jordaan
  2004-01-20  9:59     ` Neil Brown
  1 sibling, 1 reply; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20  8:08 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

> mdadm --assemble /dev/md0 --force /dev/hd[abc]3
> 
> should put it back together for you.

No luck ..

cdimage root # mdadm --verbose --assemble /dev/md0 --force /dev/hda3 /dev/hdb3 
/dev/hdc3
mdadm: looking for devices for /dev/md0
mdadm: /dev/hda3 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/hdb3 is identified as a member of /dev/md0, slot 4.
mdadm: /dev/hdc3 is identified as a member of /dev/md0, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md0
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/hda3 to /dev/md0 as 3
mdadm: added /dev/hdb3 to /dev/md0 as 4
mdadm: added /dev/hdc3 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 1 drive - not enough to start it (use --run to 
insist).

cdimage root # cat /proc/mdstat
Personalities : [raid5]
read_ahead not set
md0 : inactive ide/host0/bus1/target0/lun0/part3[2] 
ide/host0/bus0/target1/lun0/part3[4] ide/host0/bus0/target0/lun0/part3[3]
       0 blocks
unused devices: <none>
cdimage root # mdadm --verbose --examine /dev/hda3
/dev/hda3:
           Magic : a92b4efc
         Version : 00.90.00
            UUID : dd5156aa:9157bc3c:9500db42:445b91fe
   Creation Time : Wed Dec 17 11:44:50 2003
      Raid Level : raid5
     Device Size : 38001664 (36.24 GiB 38.91 GB)
    Raid Devices : 3
   Total Devices : 4
Preferred Minor : 0

     Update Time : Mon Jan 19 07:41:21 2004
           State : dirty, no-errors
  Active Devices : 1
Working Devices : 3
  Failed Devices : 1
   Spare Devices : 2
        Checksum : 736178ae - correct
          Events : 0.82

          Layout : left-symmetric
      Chunk Size : 64K

       Number   Major   Minor   RaidDevice State
this     3       3        3        3        /dev/ide/host0/bus0/target0/lun0/part3
    0     0       0        0        0      faulty removed
    1     1       0        0        1      faulty removed
    2     2      22        3        2      active sync 
/dev/ide/host0/bus1/target0/lun0/part3
    3     3       3        3        3        /dev/ide/host0/bus0/target0/lun0/part3
    4     4       3       67        4        /dev/ide/host0/bus0/target1/lun0/part3

I think /dev/hdb3 was the one originally marked faulty,
and I wrongly --set-faulty /dev/hda3 ..

-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* Re: Recovering RAID5 array
  2004-01-20  8:08   ` Jean Jordaan
@ 2004-01-20  9:59     ` Neil Brown
  2004-01-20 10:40       ` Jean Jordaan
  2004-01-20 10:44       ` Jean Jordaan
  0 siblings, 2 replies; 12+ messages in thread
From: Neil Brown @ 2004-01-20  9:59 UTC (permalink / raw)
  To: Jean Jordaan; +Cc: linux-raid

On Tuesday January 20, jean@upfrontsystems.co.za wrote:
> > mdadm --assemble /dev/md0 --force /dev/hd[abc]3
> > 
> > should put it back together for you.
> 
> No luck ..
> 
> cdimage root # mdadm --verbose --assemble /dev/md0 --force /dev/hda3 /dev/hdb3 
> /dev/hdc3
> mdadm: looking for devices for /dev/md0
> mdadm: /dev/hda3 is identified as a member of /dev/md0, slot 3.
> mdadm: /dev/hdb3 is identified as a member of /dev/md0, slot 4.
> mdadm: /dev/hdc3 is identified as a member of /dev/md0, slot 2.
> mdadm: no uptodate device for slot 0 of /dev/md0
> mdadm: no uptodate device for slot 1 of /dev/md0

Looks like you must have done a hot-add in the meantime....

I would try:

 mdadm -C /dev/md0 -l 5 -n 3 /dev/hda3 missing /dev/hdc3

and check "fsck -f -n" to check the the filesystem looks OK.

This 'mdadm' command will not change any data except the raid
superblocks, which are already a mess anyway.

If that looks OK, you can then 
   mdadm /dev/md0 --add /dev/hdb3
and see if hdb is working after all.

If it doesn't look OK, stop the array and try creating it with a
different combination of devices.
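
For example (a rough sketch; which two real devices to use, their order,
and where the "missing" goes is what you vary):

 mdadm --stop /dev/md0
 mdadm -C /dev/md0 -l 5 -n 3 /dev/hdc3 missing /dev/hda3
 fsck -f -n /dev/md0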

As long as you create a raid5 array with one missing device and only
read from the resulting device, no data apart from the superblock will
be corrupted.  Once you hot-add, or create without a missing device,
data could get corrupted if the array hasn't been assembled correctly.

NeilBrown


* Re: Recovering RAID5 array
  2004-01-20  9:59     ` Neil Brown
@ 2004-01-20 10:40       ` Jean Jordaan
  2004-01-20 12:10         ` Maarten v d Berg
  2004-01-20 10:44       ` Jean Jordaan
  1 sibling, 1 reply; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20 10:40 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Neil,

thank you very much for your help ..

>  mdadm -C /dev/md0 -l 5 -n 3 /dev/hda3 missing /dev/hdc3

I did that, once with 'missing' in each position. All I get is:

cdimage root # mdadm --create /dev/md0 --raid-devices=3 --level=5 
--spare-devices=0 --chunk=64 missing /dev/hdb3 /dev/hdc3
mdadm: /dev/hdb3 appears to be part of a raid array:
     level=5 devices=3 ctime=Tue Jan 20 10:35:45 2004
mdadm: /dev/hdc3 appears to be part of a raid array:
     level=5 devices=3 ctime=Tue Jan 20 10:09:43 2004
Continue creating array? y
mdadm: array /dev/md0 started.
cdimage root # mount -r -t reiserfs /dev/md0 /mnt/gentoo/raid/
mount: Not a directory

cdimage root # cat /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 ide/host0/bus1/target0/lun0/part3[2] 
ide/host0/bus0/target1/lun0/part3[1]
       76003328 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

unused devices: <none>


-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* Re: Recovering RAID5 array
  2004-01-20  9:59     ` Neil Brown
  2004-01-20 10:40       ` Jean Jordaan
@ 2004-01-20 10:44       ` Jean Jordaan
  1 sibling, 0 replies; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20 10:44 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

>>cdimage root # mdadm --verbose --assemble /dev/md0 --force /dev/hda3 /dev/hdb3 
>>/dev/hdc3
>>mdadm: looking for devices for /dev/md0
>>mdadm: /dev/hda3 is identified as a member of /dev/md0, slot 3.
>>mdadm: /dev/hdb3 is identified as a member of /dev/md0, slot 4.
>>mdadm: /dev/hdc3 is identified as a member of /dev/md0, slot 2.
>>mdadm: no uptodate device for slot 0 of /dev/md0
>>mdadm: no uptodate device for slot 1 of /dev/md0
> 
> Looks like you must have done a hot-add in the mean time....

What I did: first --set-faulty /dev/hda3. After that I tried to
add it back, but got an error reporting that the array wasn't
running because not enough of its components were available,
and that the device could therefore not be added.

-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* Re: Recovering RAID5 array
  2004-01-20 10:40       ` Jean Jordaan
@ 2004-01-20 12:10         ` Maarten v d Berg
  2004-01-20 12:23           ` Jean Jordaan
  0 siblings, 1 reply; 12+ messages in thread
From: Maarten v d Berg @ 2004-01-20 12:10 UTC (permalink / raw)
  To: linux-raid

On Tuesday 20 January 2004 11:40, Jean Jordaan wrote:
> Neil,
>
> thank you very much for your help ..
>
> >  mdadm -C /dev/md0 -l 5 -n 3 /dev/hda3 missing /dev/hdc3
>
> Did that, once with 'missing' in each place. All I get:
>
> cdimage root # mdadm --create /dev/md0 --raid-devices=3 --level=5
> --spare-devices=0 --chunk=64 missing /dev/hdb3 /dev/hdc3
> mdadm: /dev/hdb3 appears to be part of a raid array:
>      level=5 devices=3 ctime=Tue Jan 20 10:35:45 2004
> mdadm: /dev/hdc3 appears to be part of a raid array:
>      level=5 devices=3 ctime=Tue Jan 20 10:09:43 2004
> Continue creating array? y
> mdadm: array /dev/md0 started.
> cdimage root # mount -r -t reiserfs /dev/md0 /mnt/gentoo/raid/
> mount: Not a directory

Note that it says "Not a directory".  Not something like "can't read 
superblock" or "is not a valid block device" or similar errors which would 
indicate an error with the md array. 
So, maybe mount is right and /mnt/gentoo/raid/ IS actually wrong...?
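
A quick way to see whether the mountpoint itself is the problem (both paths 
should show up as directories):

   ls -ld /mnt/gentoo /mnt/gentoo/raid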

Furthermore, you may not realize that reiserfs will try to replay its journal 
transactions when you attempt a mount, even when mounting read-only, AFAIK.
So it may be more prudent to run a reiserfsck --check first...

Maarten



* Re: Recovering RAID5 array
  2004-01-20 12:10         ` Maarten v d Berg
@ 2004-01-20 12:23           ` Jean Jordaan
  2004-01-20 12:57             ` Maarten v d Berg
  0 siblings, 1 reply; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20 12:23 UTC (permalink / raw)
  To: maarten; +Cc: linux-raid

Hi Maarten

> So, maybe mount is right and /mnt/gentoo/raid/ IS actually wrong...?

I.e. the filesystem under RAID is corrupt? This is what I'm afraid of.

> So maybe more prudent would be a reiserfsck --check...

Oooh! Looks like something worked .. still can't mount though ..
should I try --rebuild-tree next? :

cdimage root # reiserfsck --check /dev/md0

<-------------reiserfsck, 2003------------->
reiserfsprogs 3.6.8

[...]
Will read-only check consistency of the filesystem on /dev/md0
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --check started at Tue Jan 20 12:19:38 2004
###########
Replaying journal..
0 transactions replayed
Checking internal tree../  1 (of   2)/  1 (of 126)/  1 (of 153)block 8211: The 
level of the node (25938) is not correct, (1) expected
  the problem in the internal node occured (8211), whole subtree is skipped
finished
Comparing bitmaps..vpf-10640: The on-disk and the correct bitmaps differs.
Bad nodes were found, Semantic pass skipped
1 found corruptions can be fixed only during --rebuild-tree
###########
reiserfsck finished at Tue Jan 20 12:20:11 2004
###########
cdimage root # mount -r -t reiserfs /dev/md0 /mnt/gentoo/raid/
mount: Not a directory

(Apologies for direct mail but speed is of the essence .. )

-- 
Jean Jordaan
http://www.upfrontsystems.co.za



* Re: Recovering RAID5 array
  2004-01-20 12:23           ` Jean Jordaan
@ 2004-01-20 12:57             ` Maarten v d Berg
  2004-01-20 13:28               ` Jean Jordaan
  0 siblings, 1 reply; 12+ messages in thread
From: Maarten v d Berg @ 2004-01-20 12:57 UTC (permalink / raw)
  To: linux-raid

On Tuesday 20 January 2004 13:23, Jean Jordaan wrote:
> Hi Maarten
>
> > So, maybe mount is right and /mnt/gentoo/raid/ IS actually wrong...?
>
> I.e. the filesystem under RAID is corrupt? This is what I'm afraid of.

Not necessarily... just that /mnt/gentoo/raid/ is NOT a directory.

First things first.  Why don't you try to mount /dev/md0 on /mnt ?

> > So maybe more prudent would be a reiserfsck --check...
>
> Oooh! Looks like something worked .. still can't mount though ..
> should I try --rebuild-tree next? :

That choice is entirely up to you.  Please be well aware that such choices may 
determine the fate of your data on /dev/md0, so better think twice.
Looking at the errors, you may want to experiment with the order in which you 
assemble the md device and compare the severity of the errors that reiserfsck 
reports for each ordering. Once you choose --rebuild-tree there is NO way back.
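
For each ordering you could save the read-only check output and compare, 
something like (you still have to type Yes at reiserfsck's prompt):

   reiserfsck --check /dev/md0 2>&1 | tee /root/check-ordering1.log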

But don't ask me, I'm no real expert on these questions. In fact, several 
months ago people on this list helped me solve MY raid5 two-disk failure... 

good luck
Maarten


* Re: Recovering RAID5 array
  2004-01-20 12:57             ` Maarten v d Berg
@ 2004-01-20 13:28               ` Jean Jordaan
  0 siblings, 0 replies; 12+ messages in thread
From: Jean Jordaan @ 2004-01-20 13:28 UTC (permalink / raw)
  To: linux-raid

Hi Maarten, Neil,

It worked! For the record, my rebuild-tree output is below.
It looks like that reiserfs was sick, sick, sick.

Thank you very much for your cautious, reasoned and calm
responses.

Everything ended up in lost+found, but I could retrieve the
one critical file, namely

/mnt/gentoo/raid/lost+found/2_5369/lib/zope/zope-pendrums/var/Data.fs

cdimage root # reiserfsck --rebuild-tree /dev/md0

<-------------reiserfsck, 2003------------->
reiserfsprogs 3.6.8

[...]
Will rebuild the filesystem (/dev/md0) tree
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
Replaying journal..
0 transactions replayed
###########
reiserfsck --rebuild-tree started at Tue Jan 20 12:44:02 2004
###########

Pass 0:
####### Pass 0 #######
Loading on-disk bitmap .. ok, 647544 blocks marked used
Skipping 8790 blocks (super block, journal, bitmaps) 638754 blocks will be read
0%....20%....40%....60%....80%....100%                       left 0, 22026 /sec
193779 directory entries were hashed with "r5" hash.
         "r5" hash is selected
Flushing..finished
         Read blocks (but not data blocks) 638754
                 Leaves among those 38854
                 Objectids found 8553

Pass 1 (will try to insert 38854 leaves):
####### Pass 1 #######
Looking for allocable blocks .. finished
0%....20%....40%....60%....80%....100%                        left 0, 3532 /sec
Flushing..finished
         38854 leaves read
                 38778 inserted
                         - pointers in indirect items pointing to metadata 3 
(zeroed)
                 76 not inserted
####### Pass 2 #######

Pass 2:
0%....20%....40%....60%....80%....100%                           left 0, 0 /sec
Flushing..finished
         Leaves inserted item by item 76
Pass 3 (semantic):
####### Pass 3 #########
Flushing..finished
         Files found: 0
         Directories found: 2
Pass 3a (looking for lost dir/files):
####### Pass 3a (lost+found pass) #########
Looking for lost directories:
/2_4vpf-10680: The directory [2 4] has the wrong block count in the StatData (1) 
- corrected to (2)
vpf-10650: The directory [2 4] has the wrong size in the StatData (48) - 
corrected to (752)
/2_113get_next_directory_item: The entry ".." of the directory [2 113] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5022get_next_directory_item: The entry ".." of the directory [2 5022] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5212get_next_directory_item: The entry ".." of the directory [2 5212] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5360get_next_directory_item: The entry ".." of the directory [2 5360] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5365get_next_directory_item: The entry ".." of the directory [2 5365] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5367get_next_directory_item: The entry ".." of the directory [2 5367] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5369get_next_directory_item: The entry ".." of the directory [2 5369] pointes 
to [1 2], instead of [2 258218] - corrected
/2_5369/log/wtmpvpf-10680: The file [7261 7264] has the wrong block count in the 
StatData (1152) - corrected to (1128)
/2_7587get_next_directory_item: The entry ".." of the directory [2 7587] pointes 
to [1 2], instead of [2 258218] - corrected
/2_26686get_next_directory_item: The entry ".." of the directory [2 26686] 
pointes to [1 2], instead of [2 258218] - corrected
/2_26689get_next_directory_item: The entry ".." of the directory [2 26689] 
pointes to [1 2], instead of [2 258218] - corrected
/2_26784get_next_directory_item: The entry ".." of the directory [2 26784] 
pointes to [1 2], instead of [2 258218] - corrected
Looking for lost files:
The object [141767 141772] has wrong mode (b--xr--r-x) - corrected to -rw-------
vpf-10670: The file [141767 141772] has the wrong size in the StatData (0) - 
corrected to (1912)
vpf-10680: The file [141767 141772] has the wrong block count in the StatData 
(0) - corrected to (8)
The object [141778 141818] has wrong mode (?---------) - corrected to -rw-------
vpf-10670: The file [141778 141818] has the wrong size in the StatData (0) - 
corrected to (1640)
vpf-10680: The file [141778 141818] has the wrong block count in the StatData 
(0) - corrected to (8)
The object [141836 141840] has wrong mode (?---------) - corrected to -rw-------
Flushing..finished
         Objects without names 19557
         Empty lost dirs removed 166439
         Dirs linked to /lost+found: 12
                 Dirs without stat data found 1
         Files linked to /lost+found 1975
         Objects having used objectids: 4887
                 dirs fixed 2
Pass 4 - finished       done 0, 0 /sec
         Deleted unreachable items 6
Flushing..finished
Syncing..finished
###########
reiserfsck finished at Tue Jan 20 12:46:00 2004
###########
cdimage root # mount -r -t reiserfs /dev/md0 /mnt/gentoo/raid/
cdimage root # ls /mnt/gentoo/raid/
lost+found

-- 
Jean Jordaan
http://www.upfrontsystems.co.za

