From: Jan Kara <jack@suse.cz>
To: Denis 'GNUtoo' Carikli <GNUtoo@no-log.org>
Cc: linux-ext4@vger.kernel.org
Subject: Re: very long fsck(weeks) for a small(500GB) ext4 partition.
Date: Tue, 22 Jan 2013 14:23:51 +0100	[thread overview]
Message-ID: <20130122132351.GA28331@quack.suse.cz> (raw)
In-Reply-To: <20130119174044.53b84456@x60.lan>

  Hello,

On Sat 19-01-13 17:40:44, Denis 'GNUtoo' Carikli wrote:
> I was shrinking my ext4 partition with resize2fs when the power
> went out.
  Heh, bad luck...

> Then I launched fsck against my partition, which is
> at /dev/mapper/something because it's an unlocked LUKS partition.
> The partition is a bit less than 500GB in size because it's a 500GB
> HDD.
  When something really bad happens, as it did to you, it is a good
idea to spend some more time, find an external hard drive or spare
space elsewhere, and back up the corrupted device. When you later come
to a situation like "Oh, I forgot to give -y to fsck", you can stop it
without thinking twice, and you will be glad you spent the extra time
finding free space.
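
For illustration (a rough sketch, not taken from your setup: the device
path is the placeholder from your mail and /mnt/backup is a
hypothetical location with more than 500GB free), a raw image can be
taken with dd before fsck modifies anything:

  # Copy the unlocked LUKS device to an image file; conv=noerror,sync
  # keeps going past read errors instead of aborting.
  dd if=/dev/mapper/something of=/mnt/backup/luks.img bs=4M conv=noerror,sync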

> Now the problem is that it has been checking the filesystem for more
> than 433 hours, according to htop.
> 
> Since I forgot to add -y to the fsck command, I have something
> mechanical that keeps pressing y on the keyboard all the time.
  ;)

> Here's what fsck says at the time of writing this email:
> File ... (inode #129283, mod time Tue Sep 25 21:09:35 2012) 
>   has 3 multiply-claimed block(s), shared with 1 file(s):
> /home/gnutoo/networking/SDR/uhd.20121031204636/host/build/docs/doxygen/???/structuhd_1_1not__implemented__error.html 
> (inode #14946569, mod time Tue Sep 25 21:09:35 2012)
> Clone multiply-claimed blocks<y>? yes
  I suppose the first inode number is increasing, right? As you can see,
you are now at inode 129283. I estimate your filesystem has about
31250000 inodes (chosen by mke2fs with default settings for your
filesystem size), and although far from all of them are in use, it will
take years before the fsck finishes. So I think you can terminate it
with peace of mind. There's no point in letting it run.
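
The default mke2fs inode_ratio is one inode per 16384 bytes, so for a
~500GB filesystem that works out to roughly 500e9 / 16384, about 30
million inodes, in line with the estimate above. If you want the exact
number rather than an estimate, tune2fs can read it from the superblock
(same device path as in your mail):

  # Print the inode counts recorded in the superblock
  tune2fs -l /dev/mapper/something | grep -i inode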
 
> 1) Can I stop the fsck and continue it later?
  You can run it again (with -y this time). It shouldn't do much harm
AFAIK.
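
Something like this (same device path as above; -f forces a full check
even if the filesystem looks clean, -y answers yes to every question):

  # Re-run the check unattended, no mechanical y-presser needed
  e2fsck -f -y /dev/mapper/something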

> 2) I've no idea how many inodes there are in the filesystem, but I
> fear that it's only at the beginning and will last forever.
  Yup.

> 3) If I stop and mount it, maybe I'll be able to copy the data?
  Maybe. It's definitely worth a try. Make sure to mount read-only.
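
For example (with /mnt/rescue as a hypothetical mount point; the noload
option additionally skips journal replay, so nothing at all is written
to the damaged device):

  # Mount read-only, without replaying the journal
  mount -o ro,noload /dev/mapper/something /mnt/rescue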

> e2fsprogs is at version 1.41.14-1ubuntu3.
  And using newer e2fsprogs may give better results as well.

								Honza
-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR
