linux-raid.vger.kernel.org archive mirror
* RAID6 ext3 problems
       [not found] <CAKXovctDQdiMWEdzh6Jg8OkEjXX5OHZ+hubFV-GRsc8S2QHohQ@mail.gmail.com>
@ 2012-02-12 17:16 ` Jeff W
  2012-02-12 17:31   ` Mark Knecht
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff W @ 2012-02-12 17:16 UTC (permalink / raw)
  To: linux-raid

Hello to all!
I've had a problem with my system involving a RAID array managed by
mdadm on Debian Linux and the ext3 filesystem. The partition on the
RAID array won't mount anymore.

I have/had a RAID6 array of 7 500GB drives, formatted as one large
ext3 partition.  This array resided in a system that booted from a
separate 320GB hard drive, but recently that system drive bit the dust
and was replaced. Upon reinstalling Debian on the system drive,
`mdadm --assemble --scan` didn't assemble the RAID array as it had in
times past, so I used `fdisk -l` to find all the drives marked with
'fd' (Linux RAID) and manually ran `mdadm --assemble /dev/md0
/dev/sdc1...` with the names of all the fd-marked drives (a command
I've used to successfully assemble the array before). That attempt
didn't work because it said one of the drives was busy. I've
determined after the fact that this was because I misread or
misunderstood the output the first time I ran `mdadm --assemble
--scan`: it had in fact created two arrays, one (md0) containing one
of the drives and the other (md1) containing the rest.

This confusing situation had me concerned (rightfully so, I now know)
about data loss, so I read the mdadm man page and googled for hints on
how to troubleshoot or solve this. I came across the `--examine`
option, which I used to look at each of the drives marked Linux RAID.
All of them except one (sda1) had the same UUID and what appeared to
be the correct metadata for the array I was trying to recover, so I
tried assembling with the UUID option, which gave me an array with 5
of the 7 component drives. So it had missed a drive -- sda1. If you're
wondering about the 7th drive, it didn't survive the move I went
through just before this, but the RAID6 documentation says the array
can sustain 2 drive failures and continue operating, and I have only
sustained two drive failures so far. So unless one more drive dies, I
should still be able to access that array -- correct me if I'm wrong?

Anyway, after I got 5 of the 7 drives into the array, I manually added
the 6th drive, sda1, and it began repairing itself. Phew, I thought,
and tried to mount the array, which I have done successfully in the
past -- but not this time. It threw the same error as before: mount
couldn't detect the filesystem type because of some kind of superblock
error.
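
To make the sequence clearer, this is roughly what I ran, reconstructed
from memory -- the exact device names, ordering and mount point are
approximate, and <UUID> stands for the array UUID that --examine reported:

mdadm --assemble --scan                  # created md0 and md1, unexpectedly
fdisk -l                                 # found the 'fd' (Linux RAID) partitions
mdadm --assemble /dev/md0 /dev/sdc1 ...  # failed: one drive was busy
mdadm --examine /dev/sdc1                # repeated for each fd-marked drive
mdadm --assemble --scan --uuid=<UUID>    # assembled 5 of the 7 drives
mdadm /dev/md1 --add /dev/sda1           # re-added sda1; rebuild started
mount /dev/md1 /mnt                      # fails with the superblock error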

Now, it's probably self-evident at this point that I'm not an expert,
but I'm hoping that you are and that you'll at least be able to tell
me what I did wrong so as to avoid doing it again, and at best be able
to tell me how I could recover my data.  At this point I'm confused
about what happened and how I could have possibly gotten myself in
this situation.  The RAID array wasn't assembled when I was
reinstalling Debian so that shouldn't have been able to wipe the
partition on the array, though it could have wiped sda1, but then, how
did the partition/superblock on the RAID disappear...
At present I've installed Ubuntu to the system drive, which in
hindsight was not an intelligent move because now I don't know what
version of mdadm I was using on Debian, though I was using 'stable'
with no apt pinning. I'm running testdisk to analyze the drives, hoping
it would find a backup of the superblock, but so far all it's finding is
HFS partitions, which doesn't seem promising.

If anyone can shed any light on what I did wrong -- whether I
encountered some kind of known bug or unintentionally did this myself
through improper use of mdadm -- any help at all would be hugely
appreciated.

Thanks in advance,
Jeff.


* Re: RAID6 ext3 problems
  2012-02-12 17:16 ` RAID6 ext3 problems Jeff W
@ 2012-02-12 17:31   ` Mark Knecht
       [not found]     ` <CAKXovcvvTioOP+JKd8b5-Hc1GZhAiqB85=A4d-cCP03uDPiGkg@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Knecht @ 2012-02-12 17:31 UTC (permalink / raw)
  To: Jeff W; +Cc: linux-raid

On Sun, Feb 12, 2012 at 9:16 AM, Jeff W <jeff.welling@gmail.com> wrote:
> Hello to all!
> I've had a problem with my system that involves a RAID array with
> mdadm on Debian linux and the ext3 filesystem. My partition on the
> RAID array won't let me mount it anymore.
>
> I have/had a RAID6 array of 7 500GB drives, formatted as one large
> ext3 partition.  This array resided in a system that booted from a
> separate 320GB hard drive, but recently that system drive bit the dust
> and so was replaced. Upon reinstalling Debian on the system drive,
> `mdadm --assemble --scan` didn't assemble the RAID array as it had in
> times past, so I used 'fdisk -l' to find all the drives marked with
> 'fd' (Linux RAID) and manually did `mdadm --assemble /dev/md0
> /dev/sdc1...` with the names of all the fd-marked drives (a command
> I've also used to successfully assemble the array before). That
> attempt didn't work because it said one of the drives was busy, and
> I've determined after the fact that it was because I misread or
> misunderstood the output of the first time I ran `mdadm --assemble
> --scan` because it seems that it had in fact created two arrays, one
> of the arrays (md0) containing one of the drives and the other array
> (md1) containing the rest of the drives.  This confusing situation had
> me concerned, I now know rightfully so, about data loss so I read the
> man page for mdadm and googled looking for a hint at how to
> troubleshoot or solve this, and I came across the `--examine` option
> which I used to look at each of the drives marked with Linux RAID. All
> of them except one, (sda1) had the same UID and what appeared to be
> the correct metadata for the RAID array I was trying to recover, so I
> tried assembling with the UID option which gave me an array with 5 out
> of 7 of the component drives. So, it had missed a drive -- sda1. If
> you're wondering about the 7th drive, it didn't survive the move I went
> through just before this, but RAID6 documentation says it can sustain
> 2 drive failures and continue operating, and I have only sustained two
> drive failures right now. So unless one more drive dies, I should
> still be able to access that array -- correct me if I'm wrong?
> Anyway, after I got the 5 out of 7 drives in the array, I manually
> added the 6th drive, sda1 to the array and it began repairing itself.
> Phew, I thought, so I tried to mount the array, which I have done in
> the past and was successful, but not this time. This time it threw the
> same error as before, mount couldn't detect the filesystem type
> because of some kind of superblock error.
>
> Now, it's probably self-evident at this point that I'm not an expert,
> but I'm hoping that you are and that you'll at least be able to tell
> me what I did wrong so as to avoid doing it again, and at best be able
> to tell me how I could recover my data.  At this point I'm confused
> about what happened and how I could have possibly gotten myself in
> this situation.  The RAID array wasn't assembled when I was
> reinstalling Debian so that shouldn't have been able to wipe the
> partition on the array, though it could have wiped sda1, but then, how
> did the partition/superblock on the RAID disappear...
> At present I've installed Ubuntu to the system drive, which in
> hindsight was not an intelligent move because now I don't know what
> version of mdadm I was using on Debian, though I was using 'stable'
> with no apt pinning. I'm running testdisk to analyze the drives, I was
> hoping it would be able to find a backup of the superblock but so far
> all it's finding is HFS partitions which doesn't seem promising.
>
> If anyone can shed any light on what I did wrong, if I encountered
> some kind of known bug or unintentionally did this myself through
> improper use of mdadm, any help at all would be hugely appreciated.
>
> Thanks in advance,
> Jeff.


Jeff,
   I suspect that you'll need to provide more technical info for the
heavy hitters to give good responses.

   If the devices are recognized by mdadm at all, and if you have
devices like /dev/md3, etc., then at a minimum provide the output of a
command like

mdadm -D /dev/md3

   I've had devices recognized but not mounted in the past because the
machine name changed on the reinstall, etc.
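
When that happens, regenerating the mdadm config after the reinstall has
sorted it out for me. Roughly something like this -- I believe the file
is /etc/mdadm/mdadm.conf on Debian/Ubuntu, but check before appending:

sudo mdadm --examine --scan          # prints one ARRAY line per array found
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u             # Debian/Ubuntu: refresh the initramfs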

   If the md devices aren't even recognized, then at a minimum try

mdadm -E /dev/sda3 (etc...)

to determine which partitions are part of the RAID, etc. and post back
some of that info.
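
If you want to compare the superblocks side by side, a quick loop along
these lines should do it (adjust the device list to whatever fdisk
reports on your box):

for d in /dev/sd[a-g]1; do
    echo "== $d =="
    sudo mdadm -E "$d" | grep -E 'UUID|Events|State'
done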

HTH,
Mark


* RAID6 ext3 problems
       [not found]     ` <CAKXovcvvTioOP+JKd8b5-Hc1GZhAiqB85=A4d-cCP03uDPiGkg@mail.gmail.com>
@ 2012-02-12 17:47       ` Jeff W
       [not found]       ` <CAK2H+eds+2wu6Nep639mZ1dkOMmxdzqp4_7-o=QJO_qqcTWr9g@mail.gmail.com>
  1 sibling, 0 replies; 9+ messages in thread
From: Jeff W @ 2012-02-12 17:47 UTC (permalink / raw)
  To: linux-raid

On Sun, Feb 12, 2012 at 12:31 PM, Mark Knecht <markknecht@gmail.com> wrote:
> Jeff,
>   I suspect that you'll need to provide more technical info for the
> heavy hitters to give good responses.
Thanks for the feedback Mark!

>
> mdadm -D /dev/md3

/dev/md1:
        Version : 00.90
  Creation Time : Wed Sep 22 01:54:39 2010
     Raid Level : raid6
     Array Size : 2441437440 (2328.34 GiB 2500.03 GB)
  Used Dev Size : 488287488 (465.67 GiB 500.01 GB)
   Raid Devices : 7
  Total Devices : 6
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sun Feb 12 11:01:35 2012
          State : clean, degraded
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 82b58a02:0ea23fd4:bd4f9dde:33a158c6
         Events : 0.5652574

    Number   Major   Minor   RaidDevice   State
       0       8       0         0        active sync   /dev/sda
       1       8      81         1        active sync   /dev/sdf1
       2       8      33         2        active sync   /dev/sdc1
       3       8      39         3        active sync   /dev/sdd1
       4       8      64         4        active sync   /dev/sde
       5       8      97         5        active sync   /dev/sdg1
       6       0       0         6        removed
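
As a sanity check on those numbers: the Array Size is exactly 5 times
the Used Dev Size (488287488 x 5 = 2441437440 blocks), which is what I'd
expect from RAID6 across 7 devices (7 minus 2 for parity), so the
geometry at least looks like the array I originally created.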

>
> HTH,
> Mark
If there is anything else I can provide, just ask. Thanks for the tip, Mark!

Jeff.


* Re: RAID6 ext3 problems
       [not found]       ` <CAK2H+eds+2wu6Nep639mZ1dkOMmxdzqp4_7-o=QJO_qqcTWr9g@mail.gmail.com>
@ 2012-02-12 18:43         ` Jeff W
       [not found]           ` <CAK2H+ee3ERstCsh4ju9kfU0gpFenHqUuMdtjnhFthK8C--NoXw@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff W @ 2012-02-12 18:43 UTC (permalink / raw)
  To: Mark Knecht; +Cc: linux-raid

On Sun, Feb 12, 2012 at 1:33 PM, Mark Knecht <markknecht@gmail.com> wrote:
> Hi Jeff,
>   In case you start laboring under a false perception, please
> understand I am _not_ a heavy hitter. :-)
>
>   OK, so those results raise two questions for me:
>
> 1) Was it intentional to use all of /dev/sde -- i.e., no partition like
> sde1 -- or is that something strange that needs to be investigated?

Using sde instead of sde1 was a mistake I made a long time ago and had
planned to fix, but a sudden need to relocate put that off, since the
array was working in the configuration it was in.

>
> 2) For the device that has been removed (possibly /dev/sdb1?) what do
> you get with an --examine command?

The drive that has been removed from the array has also been physically
removed from the system -- it is unplugged, because it was flooding the
system log with error messages while plugged in.

> Also, you might try smartctl on the missing drive to investigate
> whether the internal electronics identified a problem.
>
> HTH,
> Mark
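
I can't run smartctl on it right now since it's unplugged, but once I
reconnect it I'd expect to check it with something like this (sdX being
wherever it shows up):

sudo smartctl -H /dev/sdX    # quick overall health verdict
sudo smartctl -a /dev/sdX    # full attributes and error log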

Thanks,
Jeff.


* Re: RAID6 ext3 problems
       [not found]           ` <CAK2H+ee3ERstCsh4ju9kfU0gpFenHqUuMdtjnhFthK8C--NoXw@mail.gmail.com>
@ 2012-02-12 18:57             ` Jeff W
  2012-02-12 20:08               ` Mark Knecht
  0 siblings, 1 reply; 9+ messages in thread
From: Jeff W @ 2012-02-12 18:57 UTC (permalink / raw)
  To: Mark Knecht; +Cc: linux-raid

On Sun, Feb 12, 2012 at 1:51 PM, Mark Knecht <markknecht@gmail.com> wrote:
> OK, so in that case if your RAID looks good then it's good, but I
> guess it's technically degraded until you get a replacement drive in
> it.
Yeah, that's a given. The problem isn't so much with the RAID itself;
it's that I can't mount the partition *on* the RAID, and I don't know
why or how I could have gotten myself into this situation.

> In the meantime, if you've got an external USB/eSATA/network drive
> large enough to back up to then I'd suggest getting a backup complete
> before doing anything else. Just my 2 cents on that one.
Because the data I'm trying to reach is currently only *on* the RAID
(the only other copies were stolen), backing up at this point is
beyond my ability.
Thanks for your feedback and for trying to help though, I appreciate it :)
Jeff.


* Re: RAID6 ext3 problems
  2012-02-12 18:57             ` Jeff W
@ 2012-02-12 20:08               ` Mark Knecht
       [not found]                 ` <CAKXovcuCz9UZrk8+hMsJ_zuST2ZqZBtzhBYewX7ZXA2chruHKg@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Knecht @ 2012-02-12 20:08 UTC (permalink / raw)
  To: Jeff W; +Cc: linux-raid

On Sun, Feb 12, 2012 at 10:57 AM, Jeff W <jeff.welling@gmail.com> wrote:
> On Sun, Feb 12, 2012 at 1:51 PM, Mark Knecht <markknecht@gmail.com> wrote:
>> OK, so in that case if your RAID looks good then it's good, but I
>> guess it's technically degraded until you get a replacement drive in
>> it.
> Yeah, that's a given. The problem isn't so much with the RAID it's
> that I can't mount the partition *on* the RAID, and I don't know why
> or how I could have gotten myself in this situation.
>
>> In the meantime, if you've got an external USB/eSATA/network drive
>> large enough to back up to then I'd suggest getting a backup complete
>> before doing anything else. Just my 2 cents on that one.
> Because the data I'm trying to reach is currently only *on* the RAID
> (the only other copies were stolen), backing up at this point is
> beyond my ability.
> Thanks for your feedback and for trying to help though, I appreciate it :)
> Jeff.

So what _actually_ happens when you issue the commands? Please post
back the results of

cat /proc/mdstat

and

mount /dev/mdWHATEVER /mountpoint

and let's look at what the system is doing.

HTH,
Mark


* RAID6 ext3 problems
       [not found]                 ` <CAKXovcuCz9UZrk8+hMsJ_zuST2ZqZBtzhBYewX7ZXA2chruHKg@mail.gmail.com>
@ 2012-02-12 20:28                   ` Jeff W
  2012-02-12 20:52                   ` Mark Knecht
  1 sibling, 0 replies; 9+ messages in thread
From: Jeff W @ 2012-02-12 20:28 UTC (permalink / raw)
  To: linux-raid

> So what _actually_ happens when you issue the commands? Please post
> back the results of
>
> cat /proc/mdstat

jeff@Shmee:~$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid6 sda[0] sdf1[1] sdg1[5] sde[4] sdd1[3] sdc1[2]
     2441437440 blocks level 6, 64k chunk, algorithm 2 [7/6] [UUUUUU_]

unused devices: <none>


>
> and
>
> mount /dev/mdWHATEVER /mountpoint

jeff@Shmee:~$ sudo mount /dev/md1 /mnt
[sudo] password for jeff:
mount: wrong fs type, bad option, bad superblock on /dev/md1,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail  or so


jeff@Shmee:~$ dmesg | tail
[48532.584428] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga encoder (output 0)
[49369.880015] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga encoder (output 0)
[50470.240278] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga encoder (output 0)
[50522.284317] usb 2-6: USB disconnect, address 4
[52848.285524] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga encoder (output 0)
[52940.534516] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga encoder (output 0)
[53540.724023] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga encoder (output 0)
[54039.813485] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga encoder (output 0)
[58210.212241] EXT3-fs error (device md1): ext3_check_descriptors: Block bitmap for group 6016 not in group (block 277891307)!
[58210.247230] EXT3-fs: group descriptors corrupted!

Is that more helpful?
Thanks!
Jeff.


* Re: RAID6 ext3 problems
       [not found]                 ` <CAKXovcuCz9UZrk8+hMsJ_zuST2ZqZBtzhBYewX7ZXA2chruHKg@mail.gmail.com>
  2012-02-12 20:28                   ` Jeff W
@ 2012-02-12 20:52                   ` Mark Knecht
  2012-02-13  2:28                     ` NeilBrown
  1 sibling, 1 reply; 9+ messages in thread
From: Mark Knecht @ 2012-02-12 20:52 UTC (permalink / raw)
  To: Jeff W; +Cc: Linux-RAID

On Sun, Feb 12, 2012 at 12:27 PM, Jeff W <jeff.welling@gmail.com> wrote:
>>
>> So what _actually_ happens when you issue the commands? Please post
>> back the results of
>>
>> cat /proc/mdstat
>
> jeff@Shmee:~$ cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md1 : active raid6 sda[0] sdf1[1] sdg1[5] sde[4] sdd1[3] sdc1[2]
>      2441437440 blocks level 6, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
>
> unused devices: <none>
>
>
>>
>> and
>>
>> mount /dev/mdWHATEVER /mountpoint
>
> jeff@Shmee:~$ sudo mount /dev/md1 /mnt
> [sudo] password for jeff:
> mount: wrong fs type, bad option, bad superblock on /dev/md1,
>       missing codepage or helper program, or other error
>       In some cases useful info is found in syslog - try
>       dmesg | tail  or so
>
>
> jeff@Shmee:~$ dmesg | tail
> [48532.584428] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> encoder (output 0)
> [49369.880015] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> encoder (output 0)
> [50470.240278] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> encoder (output 0)
> [50522.284317] usb 2-6: USB disconnect, address 4
> [52848.285524] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> encoder (output 0)
> [52940.534516] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> encoder (output 0)
> [53540.724023] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> encoder (output 0)
> [54039.813485] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> encoder (output 0)
> [58210.212241] EXT3-fs error (device md1): ext3_check_descriptors:
> Block bitmap for group 6016 not in group (block 277891307)!
> [58210.247230] EXT3-fs: group descriptors corrupted!
>
> Is that more helpful?
> Thanks!
> Jeff.

I think it will be to others. It's a bit above my pay grade unfortunately.

Doing a quick Google of

RAID ext3_check_descriptors

turns up others having similar issues to yours (I think) with a few
posts that look like solutions, or possible solutions.

IMPORTANT: My first inclination was to suggest an fsck, but one post I
found made it sound like that hurt his RAID, so read carefully before
doing anything that might cause more harm.
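
If you want to poke at the filesystem without writing anything to it,
a read-only look with dumpe2fs might be informative first -- just an
idea, I haven't needed to do this myself (assumes e2fsprogs is
installed):

sudo dumpe2fs -h /dev/md1                      # superblock summary only
sudo dumpe2fs /dev/md1 | grep -i superblock    # primary and backup superblock locations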

- Mark


* Re: RAID6 ext3 problems
  2012-02-12 20:52                   ` Mark Knecht
@ 2012-02-13  2:28                     ` NeilBrown
  0 siblings, 0 replies; 9+ messages in thread
From: NeilBrown @ 2012-02-13  2:28 UTC (permalink / raw)
  To: Mark Knecht; +Cc: Jeff W, Linux-RAID


On Sun, 12 Feb 2012 12:52:25 -0800 Mark Knecht <markknecht@gmail.com> wrote:

> On Sun, Feb 12, 2012 at 12:27 PM, Jeff W <jeff.welling@gmail.com> wrote:
> >>
> >> So what _actually_ happens when you issue the commands? Please post
> >> back the results of
> >>
> >> cat /proc/mdstat
> >
> > jeff@Shmee:~$ cat /proc/mdstat
> > Personalities : [raid6] [raid5] [raid4]
> > md1 : active raid6 sda[0] sdf1[1] sdg1[5] sde[4] sdd1[3] sdc1[2]
> >      2441437440 blocks level 6, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
> >
> > unused devices: <none>
> >
> >
> >>
> >> and
> >>
> >> mount /dev/mdWHATEVER /mountpoint
> >
> > jeff@Shmee:~$ sudo mount /dev/md1 /mnt
> > [sudo] password for jeff:
> > mount: wrong fs type, bad option, bad superblock on /dev/md1,
> >       missing codepage or helper program, or other error
> >       In some cases useful info is found in syslog - try
> >       dmesg | tail  or so
> >
> >
> > jeff@Shmee:~$ dmesg | tail
> > [48532.584428] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> > encoder (output 0)
> > [49369.880015] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> > encoder (output 0)
> > [50470.240278] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> > encoder (output 0)
> > [50522.284317] usb 2-6: USB disconnect, address 4
> > [52848.285524] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> > encoder (output 0)
> > [52940.534516] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> > encoder (output 0)
> > [53540.724023] [drm] nouveau 0000:01:00.0: Setting dpms mode 1 on vga
> > encoder (output 0)
> > [54039.813485] [drm] nouveau 0000:01:00.0: Setting dpms mode 0 on vga
> > encoder (output 0)
> > [58210.212241] EXT3-fs error (device md1): ext3_check_descriptors:
> > Block bitmap for group 6016 not in group (block 277891307)!
> > [58210.247230] EXT3-fs: group descriptors corrupted!
> >
> > Is that more helpful?
> > Thanks!
> > Jeff.
> 
> I think it will be to others. It's a bit above my pay grade unfortunately.
> 
> Doing a quick Google of
> 
> RAID ext3_check_descriptors
> 
> turns up others having similar issues to yours (I think) with a few
> posts that look like solutions, or possible solutions.
> 
> IMPORTANT: My first inclination was to suggest an fsck but one post I
> found made it sound like that hurt his RAID so read carefully before
> doing anything that might cause more hard.

fsck is the only thing I can recommend.
Probably use "-n" first so that it only reads and never writes.  Then you can
see how much damage there appears to be before deciding if you want to fix it
(rather than ask for more suggestions).
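
i.e. something like the following -- the second form is only needed if
the primary superblock itself is damaged; 32768 is just a common backup
location for 4k-block filesystems, and "mke2fs -n /dev/md1" (note the -n,
which makes it a dry run that writes nothing) will list the actual ones:

fsck.ext3 -n /dev/md1              # read-only: report problems, change nothing
fsck.ext3 -n -b 32768 /dev/md1     # same, but start from a backup superblock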

'sda' must be left over from some other array. Maybe 'sda1' held the
other device from your current array?

I suspect that when you had the problem that caused you to want to move the
array to a new machine, something corrupted the filesystem a little -
probably not very much.  fsck for ext3 is pretty good at fixing things so
you'll probably get your data back.

NeilBrown


