linux-raid.vger.kernel.org archive mirror
* Raid Problem - Unknown File System Type
@ 2011-11-09 16:12 William Colls
  2011-11-09 16:39 ` Robin Hill
  2011-11-09 17:07 ` Gordon Henderson
  0 siblings, 2 replies; 16+ messages in thread
From: William Colls @ 2011-11-09 16:12 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org


Environment

Kubuntu Linux 10.04.3 LTS
mdadm 2.6.7.1-1ubuntu15

I have two identical disks that were in a raid configuration in another 
machine (also running 10.04). I removed them from the old machine, 
mounted them in a new machine, booted up, and at a terminal prompt as 
root issued

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

The configuration in the old machine was raid 1.

I checked the contents of /proc/mdstat and it confirmed that md0 was 
indeed running, with 2 devices, as expected. But it also said it was 
resyncing the disks, which I didn't expect.

When the resync completed, I was unable to mount /dev/md0p1. Specifying 
-t ext3 in the mount command gives the error message "wrong fs type, bad 
option, bad superblock on /dev/md0p1". Trying mount with no -t gives the 
error "unknown filesystem type linux_raid_member". Looking at the disks 
with Gparted confirms that the system sees the disks, but the filesystem 
shows as unknown.

The output from mdadm --examine /dev/sdb

/dev/sdb:
           Magic : a92b4efc
         Version : 00.90.00
            UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
   Creation Time : Tue Nov  8 13:14:48 2011
      Raid Level : raid1
   Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
      Array Size : 732574464 (698.64 GiB 750.16 GB)
    Raid Devices : 2
   Total Devices : 2
Preferred Minor : 0

     Update Time : Tue Nov  8 16:05:42 2011
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0
        Checksum : c4195c85 - correct
          Events : 34


       Number   Major   Minor   RaidDevice State
this     0       8       16        0      active sync   /dev/sdb

    0     0       8       16        0      active sync   /dev/sdb
    1     1       8       32        1      active sync   /dev/sdc

--- end of output

Output from mdadm --examine /dev/sdc

/dev/sdc:
           Magic : a92b4efc
         Version : 00.90.00
            UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
   Creation Time : Tue Nov  8 13:14:48 2011
      Raid Level : raid1
   Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
      Array Size : 732574464 (698.64 GiB 750.16 GB)
    Raid Devices : 2
   Total Devices : 2
Preferred Minor : 0

     Update Time : Tue Nov  8 16:05:42 2011
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0
        Checksum : c4195c97 - correct
          Events : 34


       Number   Major   Minor   RaidDevice State
this     1       8       32        1      active sync   /dev/sdc

    0     0       8       16        0      active sync   /dev/sdb
    1     1       8       32        1      active sync   /dev/sdc

---- end of output

So - am I truly up the creek without a paddle? Is there any way to 
recover this array? I have backups of most of it, but it will take a 
while to find and restore. And for sure something will be lost.

Thanks for your time.

-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 16:12 Raid Problem - Unknown File System Type William Colls
@ 2011-11-09 16:39 ` Robin Hill
  2011-11-09 17:12   ` William Colls
  2011-11-09 17:07 ` Gordon Henderson
  1 sibling, 1 reply; 16+ messages in thread
From: Robin Hill @ 2011-11-09 16:39 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 3932 bytes --]

On Wed Nov 09, 2011 at 11:12:59AM -0500, William Colls wrote:

> 
> Environment
> 
> Kubuntu Linux 10.04.3 LTS
> mdadm 2.6.7.1-1ubuntu15
> 
> I have two identical disks that were in a raid configuration in another 
> machine (also running 10.04). I removed them from the old machine, 
> mounted them in a new machine, booted up, and at a terminal prompt as 
> root issued
> 
> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
> 
You should have just assembled them. You've now created a new array
instead of just assembling the old one.
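
For future reference, assembling would have been something like this
(untested here, and the --scan form may need ARRAY lines in mdadm.conf):

    mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

or, letting mdadm hunt for known arrays itself:

    mdadm --assemble --scan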

> The configuration in the old machine was raid 1.
> 
> I checked the contents of /proc/mdstat and it confirmed that md0 was 
> indeed running, with 2 devices, as expected. But it also said it was 
> resyncing the disks, which I didn't expect.
> 
> When the resync completed, I was unable to mount /dev/md0p1. Specifying 
> -t ext3 in the mount command gives the error message "wrong fs type, bad 
> option, bad superblock on /dev/md0p1". Trying mount with no -t gives the 
> error "unknown filesystem type linux_raid_member". Looking at the disks 
> with Gparted confirms that the system sees the disks, but the filesystem 
> shows as unknown.
> 
It would look like the old array was either created with an older mdadm
version (with different defaults) or used some non-default parameter
values.

> The output from mdadm --examine /dev/sdb
> 
> /dev/sdb:
>            Magic : a92b4efc
>          Version : 00.90.00
>             UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
>    Creation Time : Tue Nov  8 13:14:48 2011
>       Raid Level : raid1
>    Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
>       Array Size : 732574464 (698.64 GiB 750.16 GB)
>     Raid Devices : 2
>    Total Devices : 2
> Preferred Minor : 0
> 
>      Update Time : Tue Nov  8 16:05:42 2011
>            State : clean
>   Active Devices : 2
> Working Devices : 2
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : c4195c85 - correct
>           Events : 34
> 
> 
>        Number   Major   Minor   RaidDevice State
> this     0       8       16        0      active sync   /dev/sdb
> 
>     0     0       8       16        0      active sync   /dev/sdb
>     1     1       8       32        1      active sync   /dev/sdc
> 
> --- end of output
> 
> Output from mdadm --examine /dev/sdc
> 
> /dev/sdc:
>            Magic : a92b4efc
>          Version : 00.90.00
>             UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
>    Creation Time : Tue Nov  8 13:14:48 2011
>       Raid Level : raid1
>    Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
>       Array Size : 732574464 (698.64 GiB 750.16 GB)
>     Raid Devices : 2
>    Total Devices : 2
> Preferred Minor : 0
> 
>      Update Time : Tue Nov  8 16:05:42 2011
>            State : clean
>   Active Devices : 2
> Working Devices : 2
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : c4195c97 - correct
>           Events : 34
> 
> 
>        Number   Major   Minor   RaidDevice State
> this     1       8       32        1      active sync   /dev/sdc
> 
>     0     0       8       16        0      active sync   /dev/sdb
>     1     1       8       32        1      active sync   /dev/sdc
> 
> ---- end of output
> 
> So - am I truly up the creek without a paddle? Is there any way to 
> recover this array? I have backups of most of it, but it will take a 
> while to find and restore. And for sure something will be lost.
> 

Certainly most of the data should be there still. I don't suppose you
have a copy of the mdadm --examine output from the old system at all?

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 16:12 Raid Problem - Unknown File System Type William Colls
  2011-11-09 16:39 ` Robin Hill
@ 2011-11-09 17:07 ` Gordon Henderson
  1 sibling, 0 replies; 16+ messages in thread
From: Gordon Henderson @ 2011-11-09 17:07 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

On Wed, 9 Nov 2011, William Colls wrote:

> Environment
>
> Kubuntu Linux 10.04.3 LTS
> mdadm 2.6.7.1-1ubuntu15
>
> I have two identical disks that were in a raid configuration in another 
> machine (also running 10.04). I removed them from the old machine, mounted 
> them in a new machine, booted up, and at a terminal prompt as root issued
>
> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

Did you mean create or assemble?

If you used create it would have nuked all the data - including partition 
table and filesystem..

... actually, I don't know the precise mechanism for a create on raid 1 - 
if it just copies one disk to the other, then the data might well be 
intact if you can recover the partition table, but if it writes zeros to 
both disks then you're out of luck ...
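
(A non-destructive way to see what partition table is still on the
members, if you want to check, would be something like:

    sfdisk -d /dev/sdb

which just dumps the MBR entries without writing anything.)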

Gordon

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 16:39 ` Robin Hill
@ 2011-11-09 17:12   ` William Colls
  2011-11-09 18:55     ` Phil Turmel
  0 siblings, 1 reply; 16+ messages in thread
From: William Colls @ 2011-11-09 17:12 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

On 11/09/2011 11:39 AM, Robin Hill wrote:
> On Wed Nov 09, 2011 at 11:12:59AM -0500, William Colls wrote:
>
>>
>> Environment
>>
>> Kubuntu Linux 10.04.3 LTS
>> mdadm 2.6.7.1-1ubuntu15
>>
>> I have two identical disks that were in a raid configuration in another
>> machine (also running 10.04). I removed them from the old machine,
>> mounted them in a new machine, booted up, and at a terminal prompt as
>> root issued
>>
>> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
>>
> You should have just assembled them. You've now created a new array
> instead of just assembling the old one.

I thought, at the time, that I needed to do the create so that the 
/dev/md0 device would be created properly (new machine had no raid before).

>
>> The configuration in the old machine was raid 1.
>>
>> I checked the contents of /proc/mdstat and it confirmed that md0 was
>> indeed running, with 2 devices, as expected. But it also said it was
>> resyncing the disks, which I didn't expect.
>>
>> When the resync completed, I was unable to mount /dev/md0p1. Specifying
>> -t ext3 in the mount command gives the error message "wrong fs type, bad
>> option, bad superblock on /dev/md0p1". Trying mount with no -t gives the
>> error "unknown filesystem type linux_raid_member". Looking at the disks
>> with Gparted confirms that the system sees the disks, but the filesystem
>> shows as unknown.
>>
> It would look like the old array was either created with an older mdadm
> version (with different defaults) or used some non-default parameter
> values.
>
>> The output from mdadm --examine /dev/sdb
>>
>> /dev/sdb:
>>             Magic : a92b4efc
>>           Version : 00.90.00
>>              UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
>>     Creation Time : Tue Nov  8 13:14:48 2011
>>        Raid Level : raid1
>>     Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
>>        Array Size : 732574464 (698.64 GiB 750.16 GB)
>>      Raid Devices : 2
>>     Total Devices : 2
>> Preferred Minor : 0
>>
>>       Update Time : Tue Nov  8 16:05:42 2011
>>             State : clean
>>    Active Devices : 2
>> Working Devices : 2
>>    Failed Devices : 0
>>     Spare Devices : 0
>>          Checksum : c4195c85 - correct
>>            Events : 34
>>
>>
>>         Number   Major   Minor   RaidDevice State
>> this     0       8       16        0      active sync   /dev/sdb
>>
>>      0     0       8       16        0      active sync   /dev/sdb
>>      1     1       8       32        1      active sync   /dev/sdc
>>
>> --- end of output
>>
>> Output from mdadm --examine /dev/sdc
>>
>> /dev/sdc:
>>             Magic : a92b4efc
>>           Version : 00.90.00
>>              UUID : 1443e74d:f63f16ab:d527ef8c:7225e0b0
>>     Creation Time : Tue Nov  8 13:14:48 2011
>>        Raid Level : raid1
>>     Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
>>        Array Size : 732574464 (698.64 GiB 750.16 GB)
>>      Raid Devices : 2
>>     Total Devices : 2
>> Preferred Minor : 0
>>
>>       Update Time : Tue Nov  8 16:05:42 2011
>>             State : clean
>>    Active Devices : 2
>> Working Devices : 2
>>    Failed Devices : 0
>>     Spare Devices : 0
>>          Checksum : c4195c97 - correct
>>            Events : 34
>>
>>
>>         Number   Major   Minor   RaidDevice State
>> this     1       8       32        1      active sync   /dev/sdc
>>
>>      0     0       8       16        0      active sync   /dev/sdb
>>      1     1       8       32        1      active sync   /dev/sdc
>>
>> ---- end of output
>>
>> So - am I truly up the creek without a paddle? Is there any way to
>> recover this array? I have backups of most of it, but it will take a
>> while to find and restore. And for sure something will be lost.
>>
>
> Certainly most of the data should be there still. I don't suppose you
> have a copy of the mdadm --examine output from the old system at all?

No output from the original setup.

>
> Cheers,
>      Robin


-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 17:12   ` William Colls
@ 2011-11-09 18:55     ` Phil Turmel
  2011-11-09 19:57       ` William Colls
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-09 18:55 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

Hi William,

On 11/09/2011 12:12 PM, William Colls wrote:
[...]

> I thought, at the time, that I needed to do the create so that the /dev/md0 device would be created properly (new machine had no raid before).

That's the "--auto" option, which has sane defaults.
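
(In other words, something like

    mdadm --assemble --auto=yes /dev/md0 /dev/sdb /dev/sdc

would have created the device node and assembled the old array in one
step -- a sketch, not tested here.)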

[...]

> No output from the original setup.

From what you've described so far, a likely possibility is that the original raid 1 was using metadata version 1.1 or 1.2, which put the superblock near the beginning of the disks.  The default "--create" metadata in that old version of mdadm is 0.9, as you can see in your reports.

If so, you've likely only lost a tiny bit of data at the end of the volumes where the 0.90 superblock has been written.  (I'm also going to assume that the two disks were in-sync in old box before you moved them, so the re-sync wouldn't do any harm.)

A dump of the first 8K of your drives might be helpful here.

dd if=/dev/sdb count=16 2>/dev/null |hexdump -C

dd if=/dev/sdc count=16 2>/dev/null |hexdump -C

Regards,

Phil

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 18:55     ` Phil Turmel
@ 2011-11-09 19:57       ` William Colls
  2011-11-09 20:05         ` Phil Turmel
  0 siblings, 1 reply; 16+ messages in thread
From: William Colls @ 2011-11-09 19:57 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid@vger.kernel.org

On 11/09/2011 01:55 PM, Phil Turmel wrote:
> Hi William,
>
> On 11/09/2011 12:12 PM, William Colls wrote:
> [...]
>
>> I thought, at the time, that I needed to do the create so that the /dev/md0 device would be created properly (new machine had no raid before).
>
> That's the "--auto" option, which has sane defaults.
>
> [...]
>
>> No output from the original setup.
>
>> From what you've described so far, a likely possibility is that the original raid 1 was using metadata version 1.1 or 1.2, which put the superblock near the beginning of the disks.  The default "--create" metadata in that old version of mdadm is 0.9, as you can see in your reports.
>
> If so, you've likely only lost a tiny bit of data at the end of the volumes where the 0.90 superblock has been written.  (I'm also going to assume that the two disks were in-sync in old box before you moved them, so the re-sync wouldn't do any harm.)
>
> A dump of the first 8K of your drives might be helpful here.
>
> dd if=/dev/sdb count=16 2>/dev/null |hexdump -C
>
> dd if=/dev/sdc count=16 2>/dev/null |hexdump -C
>
> Regards,
>
> Phil
>

Did the dumps to text files. diff shows no difference between the files. 
Should I be looking for anything particular? There are sections which 
seem to have data, and several blocks that are all zeros, but I don't 
see any particular patterns, and only a very few obvious text strings.

Thanks for your interest and help.

-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 19:57       ` William Colls
@ 2011-11-09 20:05         ` Phil Turmel
       [not found]           ` <4EBAE90F.2030104@rogers.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-09 20:05 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

On 11/09/2011 02:57 PM, William Colls wrote:
> On 11/09/2011 01:55 PM, Phil Turmel wrote:
>> Hi William,
>>
>> On 11/09/2011 12:12 PM, William Colls wrote:
>> [...]
>>
>>> I thought, at the time, that I needed to do the create so that the /dev/md0 device would be created properly (new machine had no raid before).
>>
>> That's the "--auto" option, which has sane defaults.
>>
>> [...]
>>
>>> No output from the original setup.
>>
>>> From what you've described so far, a likely possibility is that the original raid 1 was using metadata version 1.1 or 1.2, which put the superblock near the beginning of the disks.  The default "--create" metadata in that old version of mdadm is 0.9, as you can see in your reports.
>>
>> If so, you've likely only lost a tiny bit of data at the end of the volumes where the 0.90 superblock has been written.  (I'm also going to assume that the two disks were in-sync in old box before you moved them, so the re-sync wouldn't do any harm.)
>>
>> A dump of the first 8K of your drives might be helpful here.
>>
>> dd if=/dev/sdb count=16 2>/dev/null |hexdump -C
>>
>> dd if=/dev/sdc count=16 2>/dev/null |hexdump -C
>>
>> Regards,
>>
>> Phil
>>
> 
> Did the dumps to text files. diff shows no difference between the files. Should I be looking for anything particular? There are sections which seem to have data, and several blocks that are all zeros, but I don't see any particular patterns, and only a very few obvious text strings.
> 
> Thanks for your interest and help.

I was expecting them to be nearly identical, unless they're corrupted.  I was particularly interested to see if an MD signature shows up at offsets 0 or 1000H, with device names at 20H or 1020H.

Or there might have been a partition table or boot loader showing, or fragments thereof.

I can't see them, of course.  (Hint.)
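
(If attaching the full dumps is awkward, even just the slices around
those offsets would do -- roughly, and untested:

    hexdump -C -n 256 /dev/sdb
    hexdump -C -s 0x1000 -n 256 /dev/sdb
)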

Phil

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
       [not found]           ` <4EBAE90F.2030104@rogers.com>
@ 2011-11-09 21:45             ` Phil Turmel
  2011-11-10  3:36               ` William Colls
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-09 21:45 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

[added the list back.  Please use reply-to-all on kernel.org lists.]

On 11/09/2011 03:56 PM, William Colls wrote:
> On 11/09/2011 03:05 PM, Phil Turmel wrote:
[...]
>> Or there might have been a partition table or boot loader showing, or fragments thereof.
>>
>> I can't see them, of course.  (Hint.)
> 
> Files attached

OK. So that wasn't it.  GRUB is in the first sector, with a MBR partition table identifying a single 750G partition starting at sector 63.

So something else is wrong.  Maybe your kernel is different, and just doesn't have the module for the FS.  Or one of the BIOSes messes with the apparent disk capacity.  Or something else is interfering.

Please show:

cat /proc/filesystems
cat /proc/partitions
fdisk -l
lsdrv

... and repeat on the old system if at all possible.  Preferably with one of the disks plugged back into it.

a complete dmesg from the old system could also be useful.

You can get lsdrv from: http://github.com/pturmel/lsdrv
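
Something along these lines should fetch and run it (assuming git is
available and that the script is the top-level "lsdrv" file in that
repository; run it as root for the most complete output):

    git clone http://github.com/pturmel/lsdrv
    ./lsdrv/lsdrv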

Phil

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-09 21:45             ` Phil Turmel
@ 2011-11-10  3:36               ` William Colls
  2011-11-10  3:57                 ` Phil Turmel
  2011-11-10  8:53                 ` Robin Hill
  0 siblings, 2 replies; 16+ messages in thread
From: William Colls @ 2011-11-10  3:36 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid@vger.kernel.org

[ .... ]
>
> OK. So that wasn't it.  GRUB is in the first sector, with a MBR partition table identifying a single 750G partition starting at sector 63.

The array was not bootable in its original configuration, so I am 
surprised that GRUB would be on the disk, but the single partition of 
750G is correct.

> So something else is wrong.  Maybe your kernel is different, and just doesn't have the module for the FS.  Or one of the BIOSes messes with the apparent disk capacity.  Or something else is interfering.
>
> Please show:
>
> cat /proc/filesystems >

nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   cgroup
nodev   cpuset
nodev   tmpfs
nodev   devtmpfs
nodev   debugfs
nodev   securityfs
nodev   sockfs
nodev   pipefs
nodev   anon_inodefs
nodev   inotifyfs
nodev   devpts
         ext3
         ext2
         ext4
nodev   ramfs
nodev   hugetlbfs
nodev   ecryptfs
nodev   fuse
         fuseblk
nodev   fusectl
nodev   mqueue
         vfat
         iso9660

> cat /proc/partitions

major minor  #blocks  name

    8        0  312571224 sda
    8        1  306929664 sda1
    8        2          1 sda2
    8        5    5639168 sda5
    8       16  732574584 sdb
    8       32  732574584 sdc
    9        0  732574464 md0
  259        0  732572001 md0p1

> fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b9f04

    Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       38212   306929664   83  Linux
/dev/sda2           38212       38914     5639169    5  Extended
/dev/sda5           38212       38914     5639168   82  Linux swap / Solaris

Disk /dev/sdb: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bb73e

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       91201   732572001   83  Linux

Disk /dev/sdc: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bb73e

    Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1       91201   732572001   83  Linux

Disk /dev/md0: 750.2 GB, 750156251136 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bb73e

     Device Boot      Start         End      Blocks   Id  System
/dev/md0p1   *           1       91201   732572001   83  Linux

> lsdrv

**Warning** The following utility(ies) failed to execute:
   sginfo
   pvs
   lvs
Some information may be missing.

PCI [pata_atiixp] 00:14.1 IDE interface: ATI Technologies Inc SB600 IDE
  ├─scsi 0:0:0:0 ATA WDC WD3200AAJB-0
  │  └─sda: [8:0] Empty/Unknown 298.09g
  │     ├─sda1: [8:1] Empty/Unknown 292.71g
  │     │  └─Mounted as /dev/disk/by-uuid/0a85841d-6b71-43ba-8558-3f86dce72359 @ /
  │     ├─sda2: [8:2] Empty/Unknown 1.00k
  │     └─sda5: [8:5] Empty/Unknown 5.38g
  └─scsi 1:x:x:x [Empty]
PCI [ahci] 00:12.0 SATA controller: ATI Technologies Inc SB600 Non-Raid-5 SATA
  ├─scsi 2:0:0:0 ATA WDC WD7500AAKS-0
  │  └─sdb: [8:16] Empty/Unknown 698.64g
  │     └─md0: [9:0] Empty/Unknown 698.64g
  │        └─md0p1: [259:0] Empty/Unknown 698.64g
  ├─scsi 3:0:0:0 ASUS DRW-24B1ST   a {B2D0CL124266}
  │  └─sr0: [11:0] Empty/Unknown 1.00g
  ├─scsi 4:0:0:0 ATA WDC WD7500AAKS-0
  │  └─sdc: [8:32] Empty/Unknown 698.64g
  └─scsi 5:x:x:x [Empty]
Other Block Devices
  ├─ram0: [1:0] Empty/Unknown 64.00m
  ├─ram1: [1:1] Empty/Unknown 64.00m
  ├─ram2: [1:2] Empty/Unknown 64.00m
  ├─ram3: [1:3] Empty/Unknown 64.00m
  ├─ram4: [1:4] Empty/Unknown 64.00m
  ├─ram5: [1:5] Empty/Unknown 64.00m
  ├─ram6: [1:6] Empty/Unknown 64.00m
  ├─ram7: [1:7] Empty/Unknown 64.00m
  ├─ram8: [1:8] Empty/Unknown 64.00m
  ├─ram9: [1:9] Empty/Unknown 64.00m
  ├─ram10: [1:10] Empty/Unknown 64.00m
  ├─ram11: [1:11] Empty/Unknown 64.00m
  ├─ram12: [1:12] Empty/Unknown 64.00m
  ├─ram13: [1:13] Empty/Unknown 64.00m
  ├─ram14: [1:14] Empty/Unknown 64.00m
  └─ram15: [1:15] Empty/Unknown 64.00m

> ... and repeat on the old system if at all possible.  Preferably with one of the disks plugged back into it.

The old system is not available.
> a complete dmesg from the old system could also be useful.
>
> You can get lsdrv from: http://github.com/pturmel/lsdrv
>
> Phil
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10  3:36               ` William Colls
@ 2011-11-10  3:57                 ` Phil Turmel
  2011-11-10 15:23                   ` William Colls
  2011-11-10  8:53                 ` Robin Hill
  1 sibling, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-10  3:57 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

On 11/09/2011 10:36 PM, William Colls wrote:
> [ .... ]
>>
>> OK. So that wasn't it.  GRUB is in the first sector, with a MBR partition table identifying a single 750G partition starting at sector 63.
> 
> The array was not bootable in its original configuration, so I am surprised that GRUB would be on the disk, but the single partition of 750G is correct.
> 
>> So something else is wrong.  Maybe your kernel is different, and just doesn't have the module for the FS.  Or one of the BIOSes messes with the apparent disk capacity.  Or something else is interfering.
>>
>> Please show:
>>
>> cat /proc/filesystems >
> 
> nodev   sysfs
> nodev   rootfs
> nodev   bdev
> nodev   proc
> nodev   cgroup
> nodev   cpuset
> nodev   tmpfs
> nodev   devtmpfs
> nodev   debugfs
> nodev   securityfs
> nodev   sockfs
> nodev   pipefs
> nodev   anon_inodefs
> nodev   inotifyfs
> nodev   devpts
>         ext3
>         ext2
>         ext4
> nodev   ramfs
> nodev   hugetlbfs
> nodev   ecryptfs
> nodev   fuse
>         fuseblk
> nodev   fusectl
> nodev   mqueue
>         vfat
>         iso9660

Hmmm.  None of the common extras, like reiserfs, xfs, or jfs.  Nor support for DVDs w/ udf.

>> cat /proc/partitions
> 
> major minor  #blocks  name
> 
>    8        0  312571224 sda
>    8        1  306929664 sda1
>    8        2          1 sda2
>    8        5    5639168 sda5
>    8       16  732574584 sdb
>    8       32  732574584 sdc
>    9        0  732574464 md0
>  259        0  732572001 md0p1

OK.

>> fdisk -l
> 
> Disk /dev/sda: 320.1 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x000b9f04
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *           1       38212   306929664   83  Linux
> /dev/sda2           38212       38914     5639169    5  Extended
> /dev/sda5           38212       38914     5639168   82  Linux swap / Solaris
> 
> Disk /dev/sdb: 750.2 GB, 750156374016 bytes
> 255 heads, 63 sectors/track, 91201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x000bb73e
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1   *           1       91201   732572001   83  Linux
> 
> Disk /dev/sdc: 750.2 GB, 750156374016 bytes
> 255 heads, 63 sectors/track, 91201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x000bb73e
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1   *           1       91201   732572001   83  Linux
> 
> Disk /dev/md0: 750.2 GB, 750156251136 bytes
> 255 heads, 63 sectors/track, 91201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x000bb73e
> 
>     Device Boot      Start         End      Blocks   Id  System
> /dev/md0p1   *           1       91201   732572001   83  Linux

OK.

>> lsdrv
> 
> **Warning** The following utility(ies) failed to execute:
>   sginfo
>   pvs
>   lvs
> Some information may be missing.

Missing pvs and lvs means LVM is not installed.  Do you recall if the array was mounted directly?

> PCI [pata_atiixp] 00:14.1 IDE interface: ATI Technologies Inc SB600 IDE
>  ├─scsi 0:0:0:0 ATA WDC WD3200AAJB-0
>  │  └─sda: [8:0] Empty/Unknown 298.09g
>  │     ├─sda1: [8:1] Empty/Unknown 292.71g
>  │     │  └─Mounted as /dev/disk/by-uuid/0a85841d-6b71-43ba-8558-3f86dce72359 @ /
>  │     ├─sda2: [8:2] Empty/Unknown 1.00k
>  │     └─sda5: [8:5] Empty/Unknown 5.38g
>  └─scsi 1:x:x:x [Empty]
> PCI [ahci] 00:12.0 SATA controller: ATI Technologies Inc SB600 Non-Raid-5 SATA
>  ├─scsi 2:0:0:0 ATA WDC WD7500AAKS-0
>  │  └─sdb: [8:16] Empty/Unknown 698.64g
>  │     └─md0: [9:0] Empty/Unknown 698.64g
>  │        └─md0p1: [259:0] Empty/Unknown 698.64g
>  ├─scsi 3:0:0:0 ASUS DRW-24B1ST   a {B2D0CL124266}
>  │  └─sr0: [11:0] Empty/Unknown 1.00g
>  ├─scsi 4:0:0:0 ATA WDC WD7500AAKS-0
>  │  └─sdc: [8:32] Empty/Unknown 698.64g
>  └─scsi 5:x:x:x [Empty]
> Other Block Devices
>  ├─ram0: [1:0] Empty/Unknown 64.00m
>  ├─ram1: [1:1] Empty/Unknown 64.00m
>  ├─ram2: [1:2] Empty/Unknown 64.00m
>  ├─ram3: [1:3] Empty/Unknown 64.00m
>  ├─ram4: [1:4] Empty/Unknown 64.00m
>  ├─ram5: [1:5] Empty/Unknown 64.00m
>  ├─ram6: [1:6] Empty/Unknown 64.00m
>  ├─ram7: [1:7] Empty/Unknown 64.00m
>  ├─ram8: [1:8] Empty/Unknown 64.00m
>  ├─ram9: [1:9] Empty/Unknown 64.00m
>  ├─ram10: [1:10] Empty/Unknown 64.00m
>  ├─ram11: [1:11] Empty/Unknown 64.00m
>  ├─ram12: [1:12] Empty/Unknown 64.00m
>  ├─ram13: [1:13] Empty/Unknown 64.00m
>  ├─ram14: [1:14] Empty/Unknown 64.00m
>  └─ram15: [1:15] Empty/Unknown 64.00m

No surprises, but not enough information.

>> ... and repeat on the old system if at all possible.  Preferably with one of the disks plugged back into it.
> 
> The old system is not available

Unfortunate.

Please show a hexdump of the first 8k of /dev/md0p1.  That should give us a signature to hunt down.
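
Same recipe as before, just against the partition device:

dd if=/dev/md0p1 count=16 2>/dev/null |hexdump -C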

In the meantime, consider installing some FS support packages:

xfsprogs
reiserfsprogs
jfsutils
btrfs-tools
udftools
lvm2
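
(On Kubuntu that should just be something like the following -- package
names as above, not checked against the 10.04 repositories:

    apt-get install xfsprogs reiserfsprogs jfsutils btrfs-tools udftools lvm2
)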


Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10  3:36               ` William Colls
  2011-11-10  3:57                 ` Phil Turmel
@ 2011-11-10  8:53                 ` Robin Hill
  1 sibling, 0 replies; 16+ messages in thread
From: Robin Hill @ 2011-11-10  8:53 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 1114 bytes --]

On Wed Nov 09, 2011 at 10:36:05PM -0500, William Colls wrote:

> [ .... ]
> >
> > OK. So that wasn't it.  GRUB is in the first sector, with a MBR
> > partition table identifying a single 750G partition starting at
> > sector 63.
> 
> The array was not bootable in its original configuration, so I am 
> surprised that GRUB would be on the disk, but the single partition of 
> 750G is correct.
> 
If the disk is partitioned then did you actually want the array made up
of the partitions rather than the whole disk? In which case stopping the
array and recreating using the partitions may be all that's needed:
    mdadm -S /dev/md0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 --assume-clean

That could cause further damage if that's not how it should be set up,
so I'd recommend thinking hard before running it!

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10  3:57                 ` Phil Turmel
@ 2011-11-10 15:23                   ` William Colls
  2011-11-10 15:48                     ` Phil Turmel
  0 siblings, 1 reply; 16+ messages in thread
From: William Colls @ 2011-11-10 15:23 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid@vger.kernel.org

On 11/09/2011 10:57 PM, Phil Turmel wrote:

[....]

>>> lsdrv
>>
>> **Warning** The following utility(ies) failed to execute:
>>    sginfo
>>    pvs
>>    lvs
>> Some information may be missing.
>
> Missing pvs and lvs means LVM is not installed.  Do you recall if the array was mounted directly?

the array was mounted as follows

mount -t ext3 /dev/md0p1 /opt/share

LVM was not installed on the old system, nor is it installed on the new 
machine

> [....]

> Please show a hexdump of the first 8k of /dev/md0p1.  That should give us a signature to hunt down.

00000000  fa b8 00 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0 |................|
00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00 |...|.........!..|
00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75 |....8.u........u|
00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b |.........|...t..|
00000040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00 |L.....|.........|
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
*
000001b0  00 00 00 00 00 00 00 00  17 29 06 00 00 00 00 01 |.........)......|
000001c0  01 00 83 fe ff ff 3f 00  00 00 01 14 54 57 00 00 |......?.....TW..|
000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
*
000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa |..............U.|
00000200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
*
00002000


> In the meantime, consider installing some FS support packages:
>
> xfsprogs
> reiserfsprogs
> jfsutils
> btrfs-tools
> udftools
> lvm2
>
>
> Phil
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10 15:23                   ` William Colls
@ 2011-11-10 15:48                     ` Phil Turmel
  2011-11-10 16:12                       ` John Robinson
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-10 15:48 UTC (permalink / raw)
  To: William Colls; +Cc: linux-raid@vger.kernel.org

On 11/10/2011 10:23 AM, William Colls wrote:
[...]
> the array was mounted as follows
> 
> mount -t ext3 /dev/md0p1 /opt/share
> 
> LVM was not installed on the old system, nor is it installed on the new machine

OK.

>> Please show a hexdump of the first 8k of /dev/md0p1.  That should give us a signature to hunt down.
> 
> 00000000  fa b8 00 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0 |................|
> 00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00 |...|.........!..|
> 00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75 |....8.u........u|
> 00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b |.........|...t..|
> 00000040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00 |L.....|.........|
> 00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
> *
> 000001b0  00 00 00 00 00 00 00 00  17 29 06 00 00 00 00 01 |.........)......|
> 000001c0  01 00 83 fe ff ff 3f 00  00 00 01 14 54 57 00 00 |......?.....TW..|
> 000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
> *
> 000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa |..............U.|
> 00000200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00 |................|
> *
> 00002000

The signature is definitely not there for superblock 0, and it doesn't look like any other structure I'm familiar with.

Since you are confident it was ext3, try:
dumpe2fs /dev/md0p1 | grep superblock

or:
dumpe2fs -f /dev/md0p1 | grep superblock

That might find a backup superblock that can then be used with fsck.  See e2fsck(8), option "-b".

I would run e2fsck in read-only mode for each backup superblock attempted, to see how bad the situation is.  If its mostly clean, then it would be safe to proceed with the actual repair.
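
A sketch of that read-only pass (32768 is just the usual first backup
superblock location for a 4k-block ext3 filesystem -- use whatever
dumpe2fs actually reports):

    e2fsck -n -b 32768 /dev/md0p1

The -n answers "no" to every question, so nothing gets written.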

If none of this works, I'm out of ideas.  You'd probably want to ask for more help on the linux-ext4 mailing list.

Phil

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10 15:48                     ` Phil Turmel
@ 2011-11-10 16:12                       ` John Robinson
  2011-11-10 16:32                         ` Phil Turmel
  0 siblings, 1 reply; 16+ messages in thread
From: John Robinson @ 2011-11-10 16:12 UTC (permalink / raw)
  To: Phil Turmel; +Cc: William Colls, linux-raid@vger.kernel.org

On 10/11/2011 15:48, Phil Turmel wrote:
> On 11/10/2011 10:23 AM, William Colls wrote: [...]
>> the array was mounted as follows
>>
>> mount -t ext3 /dev/md0p1 /opt/share
>>
>> LVM was not installed on the old system, nor is it installed on the
>> new machine
[...]
> If none of this works, I'm out of ideas.  You'd probably want to ask
> for more help on the linux-ext4 mailing list.

The only thing that occurs to me is the possibility that both the md 
device was made from partitions, not whole drives, and the md device was 
itself partitioned. I wouldn't know quite how to go about checking though.
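
(Off the top of my head, one rough check would be to look at the first
sector of what's inside the array -- /dev/md0p1 here -- for an MBR
signature, e.g.

    dd if=/dev/md0p1 count=1 2>/dev/null | hexdump -C

A 55 aa at offset 0x1fe, with partition entries from 0x1be, would
suggest a nested partition table. But I may be missing something.)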

Cheers,

John.

-- 
John Robinson, yuiop IT services
0131 557 9577 / 07771 784 058
46/12 Broughton Road, Edinburgh EH7 4EE

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10 16:12                       ` John Robinson
@ 2011-11-10 16:32                         ` Phil Turmel
  2011-11-14 15:01                           ` William Colls
  0 siblings, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2011-11-10 16:32 UTC (permalink / raw)
  To: John Robinson; +Cc: William Colls, linux-raid@vger.kernel.org

On 11/10/2011 11:12 AM, John Robinson wrote:
> On 10/11/2011 15:48, Phil Turmel wrote:
>> On 11/10/2011 10:23 AM, William Colls wrote: [...]
>>> the array was mounted as follows
>>> 
>>> mount -t ext3 /dev/md0p1 /opt/share
>>> 
>>> LVM was not installed on the old system, nor is it installed on
>>> the new machine
> [...]
>> If none of this works, I'm out of ideas.  You'd probably want to
>> ask for more help on the linux-ext4 mailing list.
> 
> The only thing that occurs to me is the possibility that both the md
> device was made from partitions, not whole drives, and the md device
> was itself partitioned. I wouldn't know quite how to go about
> checking though.

Great call!  I just looked back at the hexdump, and sure enough, there's a nested MBR there.  It's missing a bootloader, which threw me off, but there is a single partition defined.

William, you can thank John.

Here's what you need to do:

mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[bc]

partx -a /dev/sdb
partx -a /dev/sdc

cat /proc/partitions
(to verify that /dev/sdb1 and /dev/sdc1 are present)

mdadm --create --assume-clean /dev/md0 --level=1 -n 2 /dev/sdb1 /dev/sdc1

cat /proc/partitions
(to verify that /dev/md0p1 is present)

mount ....

If you want to minimize the chance of mdadm getting confused again, you probably want v1.x metadata.  (But not just yet.  Get your data back, first.)  It includes offset information that avoids the ambiguity when v0.90 metadata is on the last partition of a disk.  Otherwise, you need to set up mdadm.conf to exclude /dev/sdb and /dev/sdc from consideration, and make sure that ends up in your initramfs.
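
A rough sketch of that mdadm.conf restriction (illustrative only; use
the UUID that mdadm --detail /dev/md0 reports after the re-create):

    DEVICE /dev/sdb1 /dev/sdc1
    ARRAY /dev/md0 UUID=<uuid-from-mdadm-detail>

With an explicit DEVICE line mdadm only ever scans the partitions, so
the 0.90 superblock at the end of the disk can't be picked up as a
whole-disk member.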

Also, your partitions, both the outer and the nested, start with sector 63.  This is bad for performance on modern drives, as most big ones have physical 4k sectors.  After you make your backup, I suggest you recreate your entire setup from scratch, making sure alignment is appropriate, and switching to v1.1 or v1.2 metadata.
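
Purely as a sketch of the shape of it (after the backup, and with the
usual double-checking of device names before running anything):

    parted -s /dev/sdb -- mklabel msdos mkpart primary 2048s -1s
    parted -s /dev/sdc -- mklabel msdos mkpart primary 2048s -1s
    mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /opt/share

Starting the partitions at sector 2048 gives 1MiB alignment, and putting
the filesystem directly on /dev/md0 (instead of partitioning the md
device again) avoids the nested-MBR confusion that started all this.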

Phil

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Raid Problem - Unknown File System Type
  2011-11-10 16:32                         ` Phil Turmel
@ 2011-11-14 15:01                           ` William Colls
  0 siblings, 0 replies; 16+ messages in thread
From: William Colls @ 2011-11-14 15:01 UTC (permalink / raw)
  To: Phil Turmel; +Cc: John Robinson, linux-raid@vger.kernel.org


First, let me apologise for the long silence. I wanted to take a "minute" 
to think carefully about how to proceed to maximize my chances of 
getting a good outcome.

Having said that, the outcome was all that I could have hoped for and 
more. I have recovered the array, and all the data seems to be complete 
and intact. Thank you all most sincerely for your thoughtful and helpful 
replies. Without your interest, I would have lost everything. Again, 
thank you for all your assistance. Once again, the open source community 
shows itself to be the greatest place to be in the computer world, and I 
feel privileged to be able to stand at the edge and try to be part of it.

Thanks all.

On 11/10/2011 11:32 AM, Phil Turmel wrote:
> On 11/10/2011 11:12 AM, John Robinson wrote:
>> On 10/11/2011 15:48, Phil Turmel wrote:
>>> On 11/10/2011 10:23 AM, William Colls wrote: [...]
>>>> the array was mounted as follows
>>>>
>>>> mount -t ext3 /dev/md0p1 /opt/share
>>>>
>>>> LVM was not installed on the old system, nor is it installed on
>>>> the new machine
>> [...]
>>> If none of this works, I'm out of ideas.  You'd probably want to
>>> ask for more help on the linux-ext4 mailing list.
>>
>> The only thing that occurs to me is the possibility that both the md
>> device was made from partitions, not whole drives, and the md device
>> was itself partitioned. I wouldn't know quite how to go about
>> checking though.
>
> Great call!  I just looked back at the hexdump, and sure enough, there's a nested MBR there.  It's missing a bootloader, which threw me off, but there is a single partition defined.
>
> William, you can thank John.
>
> Here's what you need to do:
>
> mdadm --stop /dev/md0
> mdadm --zero-superblock /dev/sd[bc]
>
> partx -a /dev/sdb
> partx -a /dev/sdc
>
> cat /proc/partitions
> (to verify that /dev/sdb1 and /dev/sdc1 are present)
>
> mdadm --create --assume-clean /dev/md0 --level=1 -n 2 /dev/sdb1 /dev/sdc1
>
> cat /proc/partitions
> (to verify that /dev/md0p1 is present)
>
> mount ....
>
> If you want to minimize the chance of mdadm getting confused again, you probably want v1.x metadata.  (But not just yet.  Get your data back, first.)  It includes offset information that avoids the ambiguity when v0.90 metadata is on the last partition of a disk.  Otherwise, you need to set up mdadm.conf to exclude /dev/sdb and /dev/sdc from consideration, and make sure that ends up in your initramfs.
>
> Also, your partitions, both the outer and the nested, start with sector 63.  This is bad for performance on modern drives, as most big ones have physical 4k sectors.  After you make your backup, I suggest you recreate your entire setup from scratch, making sure alignment is appropriate, and switching to v1.1 or v1.2 metadata.
>
> Phil
>


-- 
I know you believe that you understand what you think I said, but I am 
not sure that you realize that what you heard was not what I meant.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2011-11-14 15:01 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-11-09 16:12 Raid Problem - Unknown File System Type William Colls
2011-11-09 16:39 ` Robin Hill
2011-11-09 17:12   ` William Colls
2011-11-09 18:55     ` Phil Turmel
2011-11-09 19:57       ` William Colls
2011-11-09 20:05         ` Phil Turmel
     [not found]           ` <4EBAE90F.2030104@rogers.com>
2011-11-09 21:45             ` Phil Turmel
2011-11-10  3:36               ` William Colls
2011-11-10  3:57                 ` Phil Turmel
2011-11-10 15:23                   ` William Colls
2011-11-10 15:48                     ` Phil Turmel
2011-11-10 16:12                       ` John Robinson
2011-11-10 16:32                         ` Phil Turmel
2011-11-14 15:01                           ` William Colls
2011-11-10  8:53                 ` Robin Hill
2011-11-09 17:07 ` Gordon Henderson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).