linux-raid.vger.kernel.org archive mirror
From: NeilBrown <neilb@suse.de>
To: Stephen Haran <steveharan@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID5 NAS Recovery...00.90.01 vs 00.90
Date: Wed, 21 Nov 2012 08:39:36 +1100	[thread overview]
Message-ID: <20121121083936.1a4d4b13@notabene.brown> (raw)
In-Reply-To: <CAKcp_7ZrR+L5jZBK3cd-z780Cw76XpwMuWmwtUGt=ecwbx_LPg@mail.gmail.com>


On Tue, 20 Nov 2012 15:41:57 -0500 Stephen Haran <steveharan@gmail.com> wrote:

> Hi, I'm trying to recover a Western Digital Share Space NAS.
> I'm able to assemble the RAID5 and restore the LVM but it can't see
> any filesystem.
> 
> Below is a raid.log file that shows how the raid was configured when
> it was working.
> And also the output of mdadm -D showing the raid in its current state.
> Note the version difference (00.90.01 vs. 0.90) and the array size
> difference (2925293760 vs. 2925894144).
> I'm thinking this difference may be the reason Linux cannot see a filesystem.

Probably not - losing a few blocks from the end might make 'fsck' complain,
but it should still be able to see the filesystem.

How did you test whether you could see a filesystem: 'mount' or 'fsck -n'?
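
For what it's worth, a few read-only probes can distinguish "no filesystem
signature at all" from "filesystem present but damaged".  This is only a
sketch; the LV path below is a placeholder for whatever your LVM restore
actually created:

```shell
# All of these are read-only and make no changes to the array.
blkid /dev/md2                # should report an LVM2_member signature
file -s /dev/mapper/VG-LV     # placeholder LV path; identifies data at its start
fsck -n /dev/mapper/VG-LV     # dry run: reports problems, writes nothing
```

If blkid sees a signature but mount fails, that points at damage rather than
a completely wrong geometry.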

It looks like you re-created the array recently (Nov 18 12:07:53 2012).  Why
did you do that?
It has been created slightly smaller - not sure why.  If you explicitly
request the old per-device size with "--size=975097920", it might get it right.
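
The size difference is consistent with that: a 4-disk RAID5 stores three
data chunks per stripe, so the array size is three times the per-device
size.  A quick check against both mdadm listings (sizes in KiB):

```shell
# 4-disk RAID5 capacity = 3 data disks' worth of the per-device size.
echo $((3 * 975097920))   # original array size
echo $((3 * 975298048))   # size after the Nov 18 re-create
```

So forcing the old per-device size with --size=975097920 would reproduce
the original array size exactly.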

Are you sure the dm cow devices show exactly the same size and content as the
originals?
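
One way to check, again only a sketch - the overlay-to-partition pairing
below is assumed from the two mdadm -D listings:

```shell
# Sizes in bytes; each overlay should match its backing partition exactly.
blockdev --getsize64 /dev/dm-19 /dev/sda4
blockdev --getsize64 /dev/dm-11 /dev/sdb4
blockdev --getsize64 /dev/dm-15 /dev/sdc4
blockdev --getsize64 /dev/dm-7  /dev/sdd4
dmsetup table /dev/dm-19   # confirm the snapshot's origin device and layout
```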

The stray '.01' at the end of the version number is not relevant.  It just
indicates that a different version of mdadm was used to report on the array.

NeilBrown


> 
> My question is would the version difference explain the array size difference?
> And is it possible to create a version 00.90.01 array? I do not see
> that in the mdadm docs.
> 
> ....original working raid config....
> /dev/md2:
>         Version : 00.90.01
>   Creation Time : Wed Jun 24 19:00:59 2009
>      Raid Level : raid5
>      Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
>     Device Size : 975097920 (929.93 GiB 998.50 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 2
>     Persistence : Superblock is persistent
> 
>     Update Time : Thu Jun 25 02:36:31 2009
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            UUID : 6860a291:a5479bc6:e782da22:90dbd792
>          Events : 0.45705
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        4        0      active sync   /dev/sda4
>        1       8       20        1      active sync   /dev/sdb4
>        2       8       36        2      active sync   /dev/sdc4
>        3       8       52        3      active sync   /dev/sdd4
> 
> 
> ....and here is the raid as it stands now. Note the end user I'm
> helping tried to rebuild back on Sunday...
> 
>  % mdadm -D /dev/md2
> /dev/md2:
>         Version : 0.90
>   Creation Time : Sun 
>      Raid Level : raid5
>      Array Size : 2925894144 (2790.35 GiB 2996.12 GB)
>   Used Dev Size : 975298048 (930.12 GiB 998.71 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 2
>     Persistence : Superblock is persistent
> 
>     Update Time : Tue Nov 20 16:06:10 2012
>           State : clean
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            UUID : 2ac5bacd:b40dc3f5:cb031839:58437670
>          Events : 0.1
> 
>     Number   Major   Minor   RaidDevice State
>        0     253       19        0      active sync   /dev/dm-19  <<<
> Note I am using cow devices via dmsetup
>        1     253       11        1      active sync   /dev/dm-11
>        2     253       15        2      active sync   /dev/dm-15
>        3     253        7        3      active sync   /dev/dm-7
> 
> Thank you for any and all help.
> 
> Regards,
> Stephen
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



Thread overview: 5+ messages
2012-11-20 20:41 RAID5 NAS Recovery...00.90.01 vs 00.90 Stephen Haran
2012-11-20 21:39 ` NeilBrown [this message]
2012-11-21 17:29   ` Stephen Haran
2012-11-22  5:02     ` NeilBrown
2012-12-07  1:07       ` Stephen Haran
