* RAID5 NAS Recovery...00.90.01 vs 00.90
From: Stephen Haran @ 2012-11-20 20:41 UTC
To: linux-raid
Hi, I'm trying to recover a Western Digital Share Space NAS.
I'm able to assemble the RAID5 and restore the LVM, but it can't see
any filesystem.
Below is a raid.log file that shows how the raid was configured when
it was working, and also the output of mdadm -D showing the raid in
its current state.
Note the version difference, 00.90.01 vs. 0.90, and the array size
difference, 2925293760 vs. 2925894144.
I'm thinking this difference may be the reason Linux cannot see a filesystem.
My question is: would the version difference explain the array size difference?
And is it possible to create a version 00.90.01 array? I do not see
that option in the mdadm docs.
....original working raid config....
/dev/md2:
Version : 00.90.01
Creation Time : Wed Jun 24 19:00:59 2009
Raid Level : raid5
Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
Device Size : 975097920 (929.93 GiB 998.50 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Jun 25 02:36:31 2009
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 6860a291:a5479bc6:e782da22:90dbd792
Events : 0.45705
Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4
2 8 36 2 active sync /dev/sdc4
3 8 52 3 active sync /dev/sdd4
....and here is the raid as it stands now. Note that the end user I'm
helping tried a rebuild back on Sunday...
% mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sun Nov 18 12:07:53 2012
Raid Level : raid5
Array Size : 2925894144 (2790.35 GiB 2996.12 GB)
Used Dev Size : 975298048 (930.12 GiB 998.71 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Nov 20 16:06:10 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 2ac5bacd:b40dc3f5:cb031839:58437670
Events : 0.1
Number Major Minor RaidDevice State
0 253 19 0 active sync /dev/dm-19 <<<
Note: I am using cow devices via dmsetup (a rough sketch of the overlay setup follows the table)
1 253 11 1 active sync /dev/dm-11
2 253 15 2 active sync /dev/dm-15
3 253 7 3 active sync /dev/dm-7
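For reference, the cow overlays were set up roughly along these lines - the
file name, size and loop device below are illustrative only, not the exact
commands used:

  # put a copy-on-write overlay over a partition so nothing touches the original
  truncate -s 2G /tmp/sda4.cow                  # sparse file to absorb any writes
  loop=$(losetup -f --show /tmp/sda4.cow)       # attach it to a free loop device
  size=$(blockdev --getsz /dev/sda4)            # origin size in 512-byte sectors
  echo "0 $size snapshot /dev/sda4 $loop N 8" | dmsetup create sda4-cow

Repeat for each of the four partitions; the resulting /dev/mapper/*-cow devices
then appear as the /dev/dm-N devices above.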
Thank you for any and all help.
Regards,
Stephen
* Re: RAID5 NAS Recovery...00.90.01 vs 00.90
From: NeilBrown @ 2012-11-20 21:39 UTC
To: Stephen Haran; +Cc: linux-raid
On Tue, 20 Nov 2012 15:41:57 -0500 Stephen Haran <steveharan@gmail.com> wrote:
> Hi, I'm trying to recover a Western Digital Share Space NAS.
> I'm able to assemble the RAID5 and restore the LVM but it can't see
> any filesystem.
>
> Below is a raid.log file that shows how the raid was configured when
> it was working.
> And also the output of mdadm -D showing the raid in it's current state.
> Note the Version difference 00.90.01 vs. 0.90. And the array size
> difference 2925293760 vs. 2925894144
> I'm thinking this difference may be the reason Linux can not see a filesystem.
Probably not - losing a few blocks from the end might make 'fsck' complain,
but it should still be able to see the filesystem.
How did you test if you could see a filesystem? 'mount' or 'fsck -n' ?
It looks like you re-created the array recently (Nov 18 12:07:53 2012). Why
did you do that?
It has been created slightly larger - not sure why. If you explicitly
request the old per-device size with "--size=975097920" it might get it right.
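That would be something like the following - the device order here is just the
order your current mdadm -D reports, which may or may not be the original
order, and it should only ever be run against the cow devices:

  mdadm --stop /dev/md2
  mdadm --create /dev/md2 --metadata=0.90 --level=5 --chunk=64 \
        --layout=left-symmetric --raid-devices=4 --size=975097920 \
        --assume-clean /dev/dm-19 /dev/dm-11 /dev/dm-15 /dev/dm-7

--assume-clean keeps md from starting a resync (and so rewriting parity) while
you experiment.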
Are you sure the dm cow devices show exactly the same size and content as the
originals?
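For example, comparing something like the following for each original/cow pair
(device names here are just examples):

  % blockdev --getsize64 /dev/sda4
  % blockdev --getsize64 /dev/dm-19
  % mdadm -E /dev/sda4
  % mdadm -E /dev/dm-19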
The stray '.01' at the end of the version number is not relevant. It just
indicates a different version of mdadm in use to report the array.
NeilBrown
* Re: RAID5 NAS Recovery...00.90.01 vs 00.90
From: Stephen Haran @ 2012-11-21 17:29 UTC
To: NeilBrown; +Cc: linux-raid
Thank you, Neil, for your reply; my comments are below...
On Tue, Nov 20, 2012 at 4:39 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 20 Nov 2012 15:41:57 -0500 Stephen Haran <steveharan@gmail.com> wrote:
>
>> Hi, I'm trying to recover a Western Digital Share Space NAS.
>> I'm able to assemble the RAID5 and restore the LVM but it can't see
>> any filesystem.
>>
>> Below is a raid.log file that shows how the raid was configured when
>> it was working.
>> And also the output of mdadm -D showing the raid in it's current state.
>> Note the Version difference 00.90.01 vs. 0.90. And the array size
>> difference 2925293760 vs. 2925894144
>> I'm thinking this difference may be the reason Linux can not see a filesystem.
>
> Probably not - losing a few blocks from the end might make 'fsck' complain,
> but it should still be able to see the filesystem.
>
> How did you test if you could see a filesystem? 'mount' or 'fsck -n' ?
Yes, I tried both mount and fsck; neither can find the superblock.
TestDisk finds ext3 partitions but cannot see any data.
Looking with hexedit, though, the data appears to still be there.
> It looks like you re-created the array recently (Nov 18 12:07:53 2012) Why
> did you do that?
The end user attempted a firmware upgrade on the NAS box and could not access
their data afterwards. I'm not sure whether the firmware update or the end user
did the re-create.
> It has been created slightly larger - not sure why. If you explicitly
> request the old per-device size with "--size=975097920" it might get it right.
Thanks. I tried re-creating and specifying the size but it didn't help.
> Are you sure the dm cow devices show exactly the same size and content as the
> originals?
I checked the cows: the content looks right, and they match the originals
exactly, both in size and in the mdadm -E output.
> The stray '.01' at the end of the version number is not relevant. It just
> indicates a different version of mdadm in use to report the array.
Thanks for clarifying that. I'm researching ext3 recovery options now,
but certainly any other ideas are most welcome. -Stephen
* Re: RAID5 NAS Recovery...00.90.01 vs 00.90
From: NeilBrown @ 2012-11-22 5:02 UTC
To: Stephen Haran; +Cc: linux-raid
On Wed, 21 Nov 2012 12:29:53 -0500 Stephen Haran <steveharan@gmail.com> wrote:
> Thank you Neil for your reply comments below...
>
> On Tue, Nov 20, 2012 at 4:39 PM, NeilBrown <neilb@suse.de> wrote:
> > On Tue, 20 Nov 2012 15:41:57 -0500 Stephen Haran <steveharan@gmail.com> wrote:
> >
> >> Hi, I'm trying to recover a Western Digital Share Space NAS.
> >> I'm able to assemble the RAID5 and restore the LVM but it can't see
> >> any filesystem.
> >>
> >> Below is a raid.log file that shows how the raid was configured when
> >> it was working.
> >> And also the output of mdadm -D showing the raid in it's current state.
> >> Note the Version difference 00.90.01 vs. 0.90. And the array size
> >> difference 2925293760 vs. 2925894144
> >> I'm thinking this difference may be the reason Linux can not see a filesystem.
> >
> > Probably not - losing a few blocks from the end might make 'fsck' complain,
> > but it should still be able to see the filesystem.
> >
> > How did you test if you could see a filesystem? 'mount' or 'fsck -n' ?
>
> Yes I tried both mount and fsck. It can't find the superblock.
> Testdisk finds ext3 partitions but can not see data.
> But looking with hexedit the data appears to still be there.
>
> > It looks like you re-created the array recently (Nov 18 12:07:53 2012) Why
> > did you do that?
>
> The end user attempted a firmware upgrade on the NAS box and could not access
> their data afterwards. Not sure if the firmware update or the end user
> did the re-create.
It looks like the devices have been repartitioned. The /dev/sdX4 partitions
now give about 200Meg more per device than the old array used. Quite possibly
the start of the partitions has moved as well as the end.
So the array created in these partitions will be quite different from what it
was before.
I don't suppose you have the old partition tables, like you have the old
"mdadm --detail" output?
If not, you need to find where the ext3 superblock is, deduce from that where
the start of the device should be (I don't know if the superblock is at the
start of the device, or in the second block) and then recreate the partition
table.
The X4 partitions need to be at least 128K larger than the "used device size".
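A very rough way to hunt for it could be something like this - the device name
and search range are placeholders, and in practice a proper tool (testdisk, or
at least a much larger read size) would be far faster:

  # The ext2/3 superblock magic is 0xEF53, stored little-endian ("53 ef")
  # 1080 bytes (0x438) into the filesystem.  Scan sector-aligned offsets in
  # the first 100MB of the disk; every hit is a candidate for where a
  # filesystem starts.
  dev=/dev/sda
  for off in $(seq 0 512 $((100 * 1024 * 1024))); do
      m=$(dd if=$dev bs=1 skip=$((off + 1080)) count=2 2>/dev/null | od -An -tx1)
      [ "$(echo $m)" = "53 ef" ] && echo "possible ext superblock at byte $off"
  done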
Good luck.
NeilBrown
* Re: RAID5 NAS Recovery...00.90.01 vs 00.90
From: Stephen Haran @ 2012-12-07 1:07 UTC
To: NeilBrown; +Cc: linux-raid
Well, after much trial and tribulation, there is a happy ending to report.
It turns out the device size differences did not seem to matter.
The old raid config from the NAS raid.log file showed...
Version : 00.90.01
Creation Time : Wed Jun 24 19:00:59 2009
Raid Level : raid5
Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
Device Size : 975097920 (929.93 GiB 998.50 GB)
The new raid I created and recovered from is slightly larger...
Version : 0.90.00
UUID : 42a40d1c:d405694b:c97bd3bc:10cb3212 (local to host 7654ozz)
Creation Time : Thu Dec 6 17:45:05 2012
Raid Level : raid5
Used Dev Size : 975298048 (930.12 GiB 998.71 GB)
Array Size : 2925894144 (2790.35 GiB 2996.12 GB)
The complication was that the client had re-created the array with the drives
in the wrong order, wiping out the good metadata, and also did not know the
correct order of the drives as they originally sat in the NAS.
So in a four-drive raid5 there are 24 possible drive orderings, and allowing
for one missing drive adds 96 more, for a total of 120 different combinations
to try (a rough sketch of how that search can be scripted is below).
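Roughly, the search looks like this - a simplified sketch only, with the device
names and the check step as placeholders, run only against the cow snapshots
and never the real disks (the "one drive missing" cases would substitute the
literal word missing for one device):

  set -- dm-19 dm-11 dm-15 dm-7
  for a in "$@"; do for b in "$@"; do for c in "$@"; do for d in "$@"; do
      # skip anything that is not a true permutation of the four devices
      [ $a != $b ] && [ $a != $c ] && [ $a != $d ] && \
      [ $b != $c ] && [ $b != $d ] && [ $c != $d ] || continue
      echo "trying order: $a $b $c $d"
      mdadm --stop /dev/md2 2>/dev/null
      yes | mdadm --create /dev/md2 --metadata=0.90 --level=5 --chunk=64 \
            --layout=left-symmetric --raid-devices=4 --assume-clean \
            /dev/$a /dev/$b /dev/$c /dev/$d
      # ...then check whether the LVM metadata and ext3 superblock look sane,
      # e.g. pvscan, vgchange -ay, fsck.ext3 -n on the activated LV
  done; done; done; done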
After about 20 tries I hit the right combination. I still had to re-create
the VG and LVs and repair the ext3 FS with an alternate superblock (rough
sketch of that step below).
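For the record, the superblock repair was along these general lines - the LV
path and backup superblock location are illustrative, not necessarily the
exact ones used:

  # mke2fs -n only *prints* what it would do, including where the backup
  # superblocks for a filesystem of this size would live - it writes nothing.
  # (The reported locations only match if the mke2fs parameters match how
  # the filesystem was originally made.)
  % mke2fs -n /dev/vg0/lv0
  # then point e2fsck at one of the reported backup superblocks,
  # e.g. 32768 with a 4K block size
  % fsck.ext3 -b 32768 -B 4096 /dev/vg0/lv0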
In the end, 1.5TB of data was recovered. Thank you for your help, Neil.
-Stephen Haran