linux-raid.vger.kernel.org archive mirror
From: "Andris Berzins" <pkix@inbox.lv>
To: Robin Hill <robin@robinhill.me.uk>
Cc: linux-raid@vger.kernel.org
Subject: Re: failed raid re-create changed dev size
Date: Thu, 13 Dec 2012 18:13:54 +0200	[thread overview]
Message-ID: <1355415234.50c9fec2e9592@mail.inbox.lv> (raw)
In-Reply-To: <20121213093040.GA17294@cthulhu.home.robinhill.me.uk>

Quoting "Robin Hill" <robin@robinhill.me.uk>:
> On Wed Dec 12, 2012 at 06:10:59PM +0200, Andris Berzins wrote:
> 
>>>> Is it possible that no data was damaged? It is LUKS partition, i
>>>> mapped it and run "fsck -n" on underlying ext3 partition,
>>>> but fsck returned immediately with status "clean".
>>>> 
>>> By default fsck will just check whether the filesystem is marked as
>>> dirty/clean and just skip running if it's clean. You'll need to use "-f"
>>> to force it to run.
>> 
>> It seems that something got damaged. I have several traces as shown
>> below in dmesg.
>> 
>> Tried to run "fsck -f -n" but it looks that it will take several month
>> on this 15TB fs with billion files.
>> Any ideas?
>> 
> Sorry, no. You've got a corrupted filesystem and fsck is the tool to fix
> that. If you're certain that the array is now set up correctly (which it
> probably is if LUKS is able to map it okay), then you can skip the "-n"
> pass and proceed straight to repair. Depending on memory, you may also
> want to look into setting up scratch_files in e2fsck.conf as it can suck
> up a lot of memory for large filesystems. You may also want to look into
> moving to ext4 once you've got the filesystem fixed - fsck times should
> be much lower than with ext3.
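[Editor's note: Robin's scratch_files suggestion refers to e2fsck's option to keep its large in-memory tables on disk instead of in RAM. As a sketch only (the directory path is an example, not from the thread), the relevant stanza in /etc/e2fsck.conf would look like:

```
# /etc/e2fsck.conf -- divert e2fsck's big in-memory structures to disk
[scratch_files]
        directory = /var/cache/e2fsck   # example path; must already exist
```

With this set, e2fsck trades speed for memory by storing structures such as the inode count and directory info under that directory; see e2fsck.conf(5) for the exact options.]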

Thank you for the suggestions!
fsck finished sooner than I thought. :)
Very interesting. It turns out that re-creating the RAID with the wrong offset did not damage the underlying file system?

# fsck -f -n /dev/mapper/data
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/data: 121432484/457854976 files (0.1% non-contiguous), 2827329571/3662830720 blocks
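[Editor's note: for the ext4 migration Robin suggests, the commonly documented in-place upgrade path is sketched below. This is an illustration only, not from the thread; verify against tune2fs(8) for your e2fsprogs version, and take a backup first. The device name follows the one used above.]

```shell
# Enable the main ext4 features on the existing ext3 filesystem.
# Note: only files written afterwards use extents; existing files keep
# their old block mapping.
tune2fs -O extents,uninit_bg,dir_index /dev/mapper/data

# A full fsck is mandatory after enabling uninit_bg; -D also optimizes
# directories, which helps with the dir_index feature.
e2fsck -fD /dev/mapper/data

# Mount as ext4 from now on (fstab entry must be changed accordingly).
mount -t ext4 /dev/mapper/data /mnt/data
```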

> 
> The only other option would be to reformat and restore from backup.
> 
> Good luck,
> Robin
> --
> ___
> ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
> / / )      | Little Jim says ....                            |
> // !!       |      "He fallen in de water !!"                 |



Thread overview: 10+ messages
2012-12-10 23:37 failed raid re-create changed dev size Andris Berzins
2012-12-11  8:15 ` Mikael Abrahamsson
2012-12-11  9:01   ` Andris Berzins
2012-12-11  9:27     ` Robin Hill
2012-12-11 14:35       ` Andris Berzins
2012-12-11 14:49         ` Robin Hill
2012-12-12 16:10           ` Andris Berzins
2012-12-13  9:30             ` Robin Hill
2012-12-13 16:13               ` Andris Berzins [this message]
2012-12-12  1:04         ` Brad Campbell
