linux-raid.vger.kernel.org archive mirror
* rescue an alien md raid5
From: Harry Mangalam @ 2009-02-23 18:13 UTC
  To: linux-raid

Here's an unusual (long) tale of woe.

We had a USRobotics 8700 NAS appliance with 4 SATA disks in RAID5:
 <http://www.usr.com/support/product-template.asp?prod=8700>
which was a fine (if crude) ARM-based Linux NAS until it stroked out 
at some point, leaving us with a degraded RAID5 and comatose NAS 
device.

We'd like to get the files back, of course, so I've moved the disks to 
a Linux PC, hooked them up to a cheap Silicon Image 4x SATA 
controller, and brought up the whole frankenmess with mdadm.  It 
reported a clean but degraded array:

===============================================================

root@pnh-rcs:/# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Feb 14 16:30:17 2007
     Raid Level : raid5
     Array Size : 1464370176 (1396.53 GiB 1499.52 GB)
  Used Dev Size : 488123392 (465.51 GiB 499.84 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Dec 12 20:26:27 2008
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 7a60cd58:ad85ebdc:3b55d79a:a33c7fe6
         Events : 0.264294

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       35        1      active sync   /dev/sdc3
       2       8       51        2      active sync   /dev/sdd3
       3       8       67        3      active sync   /dev/sde3
===============================================================
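
For good measure, each member's md superblock can also be checked 
individually; a sketch, one member shown (the UUID and Events count 
should agree across all members):

===============================================================
mdadm --examine /dev/sdc3
===============================================================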


The original 500GB Maxtor disks were each formatted with 3 partitions, 
as follows (shown for /dev/sdc; /dev/sd[bcde] all had the same layout, 
but disk sdb was bad, so I had to replace it):

===============================================================
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
16 heads, 63 sectors/track, 969021 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261      131543+  83  Linux
/dev/sdc2             262         522      131544   82  Linux swap / 
Solaris
/dev/sdc3             523      969022   488123496+  89  Unknown
===============================================================


I formatted the replacement (a Seagate, with a different make and 
geometry) as a single partition, /dev/sdb1:
===============================================================
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x21d01216

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       60801   488384001   83  Linux
===============================================================

and tried to rebuild the raid by stopping the array, removing the bad 
disk, and adding the new disk.  It came up and reported that it was 
rebuilding.  After several hours, it finished and reported itself 
clean (although during a reboot it became /dev/md1 instead of md0).
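
The sequence was essentially this (a sketch from memory; nothing 
beyond standard mdadm operations was involved):

===============================================================
mdadm --stop /dev/md0             # stop the degraded array
# (the bad Maxtor was physically swapped for the new Seagate here)
mdadm --assemble /dev/md0 /dev/sdc3 /dev/sdd3 /dev/sde3
mdadm --add /dev/md0 /dev/sdb1    # add the replacement; resync starts
===============================================================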

===============================================================
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] 
[raid4] [raid10]
md1 : active raid5 sdb1[0] sde3[3] sdd3[2] sdc3[1]
      1464370176 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
===============================================================


===============================================================
$ mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Wed Feb 14 16:30:17 2007
     Raid Level : raid5
     Array Size : 1464370176 (1396.53 GiB 1499.52 GB)
  Used Dev Size : 488123392 (465.51 GiB 499.84 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Mon Feb 23 09:06:27 2009
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 7a60cd58:ad85ebdc:3b55d79a:a33c7fe6
         Events : 0.265494

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       35        1      active sync   /dev/sdc3
       2       8       51        2      active sync   /dev/sdd3
       3       8       67        3      active sync   /dev/sde3
===============================================================


The docs and files on the USR web site imply that the native 
filesystem was originally XFS, but when I try to mount it as such, I 
can't:

mount -vvv -t xfs /dev/md1 /mnt
mount: fstab path: "/etc/fstab"
mount: lock path:  "/etc/mtab~"
mount: temp path:  "/etc/mtab.tmp"
mount: no LABEL=, no UUID=, going to mount /dev/md1 by path
mount: spec:  "/dev/md1"
mount: node:  "/mnt"
mount: types: "xfs"
mount: opts:  "(null)"
mount: mount(2) syscall: source: "/dev/md1", target: "/mnt", 
filesystemtype: "xfs", mountflags: -1058209792, data: (null)
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

and when I check dmesg:
[  245.008000] SGI XFS with ACLs, security attributes, realtime, large 
block numbers, no debug enabled
[  245.020000] SGI XFS Quota Management subsystem
[  245.020000] XFS: SB read failed
[  327.696000] md: md0 stopped.
[  327.696000] md: unbind<sdc1>
[  327.696000] md: export_rdev(sdc1)
[  327.696000] md: unbind<sde1>
[  327.696000] md: export_rdev(sde1)
[  327.696000] md: unbind<sdd1>
[  327.696000] md: export_rdev(sdd1)
[  439.660000] XFS: bad magic number
[  439.660000] XFS: SB validate failed

Repeated attempts just repeat the last 2 lines above.  This implies 
that the superblock is bad, and xfs_repair agrees:
xfs_repair /dev/md1
        - creating 2 worker thread(s)
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
...... <lots of ...>  ... 
..found candidate secondary superblock...
unable to verify superblock, continuing...
<lots of ...>  ... 
...found candidate secondary superblock...
unable to verify superblock, continuing...
<lots of ...>  ... 


So my question is: what should I do now?  Were those first 2 
partitions (which I didn't create on the replacement disk) important?  
Should I remove the replacement disk, create all 3 partitions, and 
try again, or am I just well and truly hosed?
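
If recreating the original layout is the answer, I assume something 
like the following would copy a surviving disk's partition table 
verbatim onto the replacement (a sketch, untested; it overwrites 
sdb's current table):

===============================================================
sfdisk -d /dev/sdc > sdc-layout.txt   # dump the table, in sectors
sfdisk /dev/sdb < sdc-layout.txt      # write the same layout to sdb
===============================================================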

-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]


* Re: rescue an alien md raid5
From: Joe Landman @ 2009-02-23 18:58 UTC
  To: Harry Mangalam; +Cc: linux-raid

Harry Mangalam wrote:
> Here's an unusual (long) tale of woe.
> 
> We had a USRobotics 8700 NAS appliance with 4 SATA disks in RAID5:
>  <http://www.usr.com/support/product-template.asp?prod=8700>
> which was a fine (if crude) ARM-based Linux NAS until it stroked out 
> at some point, leaving us with a degraded RAID5 and comatose NAS 
> device.
> 
> We'd like to get the files back of course and I've moved the disks to 
> a Linux PC, hooked them up to a cheap Silicon Image 4x SATA 
> controller and brought up the whole frankenmess with mdadm.  It 
> reported a clean but degraded array:
> 
> ===============================================================
> 
> root@pnh-rcs:/# mdadm --detail /dev/md0
> /dev/md0:
>         Version : 00.90.03

> The docs and files on the USR web site imply that the native 
> filesystem was originally XFS, but when i try to mount it as such, I 
> can't:
> 
> mount -vvv -t xfs /dev/md1 /mnt
> mount: fstab path: "/etc/fstab"
> mount: lock path:  "/etc/mtab~"
> mount: temp path:  "/etc/mtab.tmp"
> mount: no LABEL=, no UUID=, going to mount /dev/md1 by path
> mount: spec:  "/dev/md1"
> mount: node:  "/mnt"
> mount: types: "xfs"
> mount: opts:  "(null)"
> mount: mount(2) syscall: source: "/dev/md1", target: "/mnt", 
> filesystemtype: "xfs", mountflags: -1058209792, data: (null)
> mount: wrong fs type, bad option, bad superblock on /dev/md1,
>        missing codepage or helper program, or other error
>        In some cases useful info is found in syslog - try
>        dmesg | tail  or so

Hmmm... is it possible that the journal is external for the XFS 
filesystem in question?  Could you try

	mount -o norecovery,ro /dev/md1 /mountpoint

Otherwise, could you dd the filesystem off there onto another (large) 
partition before you try xfs_repair?
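
Something along these lines (a sketch; the destination path is just 
an example, and it needs ~1.4 TB free):

	dd if=/dev/md1 of=/some/big/fs/md1.img bs=1M conv=noerror,sync

That way xfs_repair can be as destructive as it likes against a copy 
while the originals stay untouched.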





-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman@scalableinformatics.com
web  : http://www.scalableinformatics.com
        http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


* Re: rescue an alien md raid5
From: NeilBrown @ 2009-02-23 19:14 UTC
  To: Harry Mangalam; +Cc: linux-raid

On Tue, February 24, 2009 5:13 am, Harry Mangalam wrote:
> Here's an unusual (long) tale of woe.
>
> We had a USRobotics 8700 NAS appliance with 4 SATA disks in RAID5:
>  <http://www.usr.com/support/product-template.asp?prod=8700>
> which was a fine (if crude) ARM-based Linux NAS until it stroked out
> at some point, leaving us with a degraded RAID5 and comatose NAS
> device.
>
> We'd like to get the files back of course and I've moved the disks to
> a Linux PC, hooked them up to a cheap Silicon Image 4x SATA
> controller and brought up the whole frankenmess with mdadm.  It
> reported a clean but degraded array:

Isn't it nice that it was Linux inside that box, rather than some
proprietary OS with some undocumented raid metadata....



>
> The docs and files on the USR web site imply that the native
> filesystem was originally XFS, but when i try to mount it as such, I
> can't:

I heard Dave Chinner talking about this during LCA-2009.
If I remember correctly, there is something a bit funny about structure
layout and padding on the ARM, and it affects XFS in some strange way,
and some NAS vendors 'fixed' it the wrong way, so they are incompatible
with mainline..... or something like that.  What I really remember is
ARM + XFS + NAS == BAD Vendor
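
One quick check first (a sketch): a valid XFS superblock starts with
the magic "XFSB" at offset 0 of the device, so

	dd if=/dev/md1 bs=512 count=1 2>/dev/null | hexdump -C | head -n 4

should show 58 46 53 42 at the very start.  If the magic turns up at
some other offset instead, the filesystem probably starts elsewhere
in the array (wrong member order or a shifted data offset).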

I suggest asking at xfs@oss.sgi.com

No, those other partitions aren't relevant.  One was clearly for
swap.  The other was probably /boot.

Good luck,
NeilBrown



* Re: rescue an alien md raid5
From: Harry Mangalam @ 2009-02-23 19:59 UTC
  To: landman; +Cc: linux-raid

Hi Joe,

Thanks for the reply and the hint, but the norecovery option fails 
because the superblock can't be found.  Neil's comment seems to 
indicate that there's a basic incompatibility in the way the NAS 
vendors implemented XFS.  We may be hosed on this one (not my 
NAS/files - I'm just trying to help some others in the group).  I 
will try the XFS list, though.

The mdadm part came through like a champ - thanks!

Another reason to stay away from these kinds of gadgets (I actually 
tried this device when it first came out and abandoned that approach 
as too futzy, not nearly flexible enough, and not administrable 
enough).  Besides, with all the big P3/P4 towers being given away for 
free now, I can build one of these in an hour or so with one of the 
SilImage cards and get a much more useful box for essentially the 
cost of the raw disks.

Harry

On Monday 23 February 2009, Joe Landman wrote:
> Harry Mangalam wrote:
> > Here's an unusual (long) tale of woe.

> Hmmm... is it possible that the journal is external for the XFS
> filesystem in question?  Could you try
>
> 	mount -o norecovery,ro /dev/md1 /mountpoint
>
> Otherwise, could you dd the filesystem off there onto another
> (large) partition before you try xfs_repair?



-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]


* Re: rescue an alien md raid5
From: Harry Mangalam @ 2009-02-23 20:04 UTC
  To: NeilBrown; +Cc: linux-raid

On Monday 23 February 2009, NeilBrown wrote:
> On Tue, February 24, 2009 5:13 am, Harry Mangalam wrote:
> > Here's an unusual (long) tale of woe.
> >
> > We had a USRobotics 8700 NAS appliance with 4 SATA disks in
> > RAID5:
> > <http://www.usr.com/support/product-template.asp?prod=8700> which
> > was a fine (if crude) ARM-based Linux NAS until it stroked out at
> > some point, leaving us with a degraded RAID5 and comatose NAS
> > device.
> >
> > We'd like to get the files back of course and I've moved the
> > disks to a Linux PC, hooked them up to a cheap Silicon Image 4x
> > SATA controller and brought up the whole frankenmess with mdadm. 
> > It reported a clean but degraded array:
>
> Isn't it nice that it was Linux inside that box, rather than some
> proprietary OS with some undocumented raid metadata....

And that it used mdadm - /IT/ worked perfectly.

> > The docs and files on the USR web site imply that the native
> > filesystem was originally XFS, but when i try to mount it as
> > such, I can't:
>
> I heard Dave Chinner talking about this during LCA-2009.
> If I remember correctly, there is something a bit funny about
> structure layout and padding on the ARM, and it affects XFS in
> strange way, and some NAS vendors 'fixed' it the wrong way, so they
> are incompatible with mainline..... or something like that.  What I
> really remember is ARM + XFS + NAS == BAD Vendor
>
> I suggest asking at xfs@oss.sgi.com

Thanks - I'll go over and bug them a bit before I give up.

>
> No, those other partitions aren't relevant.  One was clearly for
> swap.  The other was probably /boot.

Yes, I now remember that it did provide a small /boot (+ a bit of OS 
space).  I added some utils there while I was testing it before I 
decided it was too much effort for the return.
Thanks very much for the note.

-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]


* Re: rescue an alien md raid5
From: hgichon @ 2009-02-24  0:56 UTC
  To: Harry Mangalam; +Cc: linux-raid


I read the USRobotics 8700 feature list:

# Supports linear storage use (all drives as one logical drive) or use 
multiple drives as primary and back-up storage
# Dynamic logical drive sizing - Add an additional drive, when needed, and 
it will be integrated into the logical drive with no effect on data

Maybe there is an LVM layer in there?
Try vgscan / vgchange -ay
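
For example (a sketch; these should do nothing destructive if no LVM 
metadata is present):

	vgscan            # scan all block devices for volume groups
	vgchange -ay      # activate any logical volumes found
	ls /dev/mapper/   # activated volumes appear here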

best regards.

-kpkim

Harry Mangalam wrote:
> Here's an unusual (long) tale of woe.
>
> [full original message snipped]



* Re: rescue an alien md raid5
From: Harry Mangalam @ 2009-02-24 21:11 UTC
  To: hgichon; +Cc: linux-raid

Hi kpkim,

You mean that the system needs an lvm/lvm2 layer to properly interpret 
the underlying filesystem?  

Possible, but the USR OSS list:
<http://www.usr.com/support/gpl/8700-OpenSourcesList-0.63_USR.xls>
doesn't list lvm as part of their implementation.  Would mdadm have 
been able to detect this?
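
I suppose I can also check the array contents directly; a sketch 
(read-only, as far as I know):

	file -s /dev/md1   # identifies the device contents (LVM PVs show up)
	pvscan             # or ask the LVM tools themselves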

Harry


On Monday 23 February 2009, hgichon wrote:
> I read USRobotics 8700 feature.
>
> # Supports linear storage use (all drives as one logical drive) or
> use multiple drives as primary and back-up storage
> # Dynamic logical drive sizing - Add an additional drive, when
> needed, and it will be integrated into the logical drive with no
> effect on data
>
> maybe... there is no lvm layer?
> try vgscan / vgchange -ay
>
> best regards.



-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]


* Re: rescue an alien md raid5
From: Nagilum @ 2009-03-02 11:22 UTC
  To: Harry Mangalam; +Cc: linux-raid

>> The docs and files on the USR web site imply that the native
>> filesystem was originally XFS, but when i try to mount it as such, I
>> can't:
>
> I heard Dave Chinner talking about this during LCA-2009.
> If I remember correctly, there is something a bit funny about structure
> layout and padding on the ARM and it affects XFS is some strange way,
> and some NAS vendors 'fixed' it the wrong way, so they are incompatible
> with mainline..... or something like that.  What I really remember is
> ARM + XFS + NAS == BAD Vendor
>
> I suggest asking at xfs@oss.sgi.com
>
> No, those other partitions aren't relevant.  One was clearly for
> swap.  The other was probably /boot.

Maybe it can be salvaged using QEMU?
There are even preconfigured ARM environments available:
http://www.scratchbox.org/
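
A minimal invocation might look like this (a sketch; the kernel and 
initrd names are placeholders you would take from an ARM distro):

qemu-system-arm -M versatilepb -m 256 \
    -kernel zImage -initrd initrd.img \
    -append "console=ttyAMA0" -nographic \
    -hda md1.img      # a dd image of the array, attached as a disk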




========================================================================
#    _  __          _ __     http://www.nagilum.org/ \n icq://69646724 #
#   / |/ /__ ____ _(_) /_ ____ _  nagilum@nagilum.org \n +491776461165 #
#  /    / _ `/ _ `/ / / // /  ' \  Amiga (68k/PPC): AOS/NetBSD/Linux   #
# /_/|_/\_,_/\_, /_/_/\_,_/_/_/_/   Mac (PPC): MacOS-X / NetBSD /Linux #
#           /___/     x86: FreeBSD/Linux/Solaris/Win2k  ARM9: EPOC EV6 #
========================================================================


----------------------------------------------------------------
cakebox.homeunix.net - all the machine one needs..


* Re: rescue an alien md raid5
From: Harry Mangalam @ 2009-03-02 16:57 UTC
  To: Nagilum; +Cc: linux-raid

That's a good idea... if we can subvert the NAS box to boot an 
alternative system.  There's no CD, but there is a USB port, so we 
could try to boot a USB ARM distro.  One problem with this is that 
there's no video system, so it's all blind until we can log into it, 
unless there's some hidden video tap on the mobo.

I'll check into that - thank you for the idea.

Harry

On Monday 02 March 2009, Nagilum wrote:
> [Neil's comments snipped]
>
> Maybe it can be salvaged using QEMU?
> There are even preconfigured ARM environments available:
> http://www.scratchbox.org/



-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]


* Re: rescue an alien md raid5
From: Ryan Wagoner @ 2009-03-14 14:12 UTC
  To: Harry Mangalam; +Cc: hgichon, linux-raid

No, LVM has no correlation with mdadm.  LVM stands for Logical Volume
Manager and lets you dynamically partition space on top of a real
partition.  Try running the commands below.  If LVM is not used, they
won't affect the data on the disk; if it is used, then your problem is
resolved.

On Tue, Feb 24, 2009 at 4:11 PM, Harry Mangalam <harry.mangalam@uci.edu> wrote:
> Hi kpkim,
>
> You mean that the system needs an lvm/lvm2 layer to properly interpret
> the underlying filesystem?
>
> Possible, but the USR OSS list:
> <http://www.usr.com/support/gpl/8700-OpenSourcesList-0.63_USR.xls>
> doesn't list lvm as part of their implementation.  Would mdadm have
> been able to detect this?
>
> Harry
>
>
> On Monday 23 February 2009, hgichon wrote:
>> I read the USRobotics 8700 feature list:
>>
>> # Supports linear storage use (all drives as one logical drive) or
>> use multiple drives as primary and back-up storage
>> # Dynamic logical drive sizing - Add an additional drive, when
>> needed, and it will be integrated into the logical drive with no
>> effect on data
>>
>> Maybe there is an LVM layer in there?
>> try vgscan / vgchange -ay
>>
>> best regards.


* Re: rescue an alien md raid5
From: Harry Mangalam @ 2009-03-14 18:15 UTC
  To: Ryan Wagoner; +Cc: hgichon, linux-raid


I mentioned the OSS list because if LVM were involved, it would have 
been included in the GPL'ed SW listing.

As noted, there's no LVM software in the GPL listing, and none of the 
LVM commands were found on the NAS device - it can be telnet'ed into, 
but it has a limited set of tools available (busybox and not much else 
besides mdadm).

I've since found out that the data on the RAID was a set of VM images 
that were recoverable via other means, so I'm stopping my efforts to 
restore it.

It was an interesting diversion, though, and I'm grateful for all the 
comments and suggestions.  I've learned a lot.

Harry


On Saturday 14 March 2009, Ryan Wagoner wrote:
> No, LVM has no correlation with mdadm.  LVM stands for Logical
> Volume Manager and lets you dynamically partition space on top of a
> real partition.  Try running the commands below.  If LVM is not
> used, they won't affect the data on the disk; if it is used, then
> your problem is resolved.
>
> On Tue, Feb 24, 2009 at 4:11 PM, Harry Mangalam 
<harry.mangalam@uci.edu> wrote:
> > [earlier discussion snipped]



-- 
Harry Mangalam - Research Computing, NACS, E2148, Engineering Gateway, 
UC Irvine 92697  949 824-0084(o), 949 285-4487(c)
---
Good judgment comes from experience; 
Experience comes from bad judgment. [F. Brooks.]

