From: Theodore Ts'o <tytso@mit.edu>
To: Brad Campbell <lists2009@fnarfbargle.com>
Cc: Azat Khuzhin <a3at.mail@gmail.com>, linux-ext4@vger.kernel.org
Subject: Re: Online resize issue with 3.13.5 & 3.15.6
Date: Sat, 26 Jul 2014 09:56:52 -0400
Message-ID: <20140726135652.GC6725@thunk.org>
In-Reply-To: <53D3B12C.5040703@fnarfbargle.com>
On Sat, Jul 26, 2014 at 09:46:20PM +0800, Brad Campbell wrote:
> This was the first resize of this FS. Initially this array was about 15T.
> About 12 months ago I attempted to resize it up to 19T and bumped up against
> the fact I had not created the initial filesystem with 64 bit support, so I
> cobbled together some storage and did a backup/create/restore. At that point
> I would probably have specified resize_inode manually as an option (as
> reading the man page it looked like a good idea as I always had plans to
> expand in future) to mke2fs along with 64bit. Fast forward 12 months and
> I've added 2 drives to the array and bumped up against this issue. So it was
> initially 4883458240 blocks. It would have been created with e2fsprogs from
> Debian Stable (so 1.42.5).
So mke2fs 1.42.11 does the right thing and simply doesn't create the
resize inode; note below that dumpe2fs prints no "Reserved GDT blocks"
line and inode 7 is left completely unused.  (Although it really should
just tell you that there's no point in using resize_inode.)
% mke2fs -Fq -t ext4 -O resize_inode,64bit /mnt/foo.img 19T
/mnt/foo.img contains a ext4 file system
created on Sat Jul 26 09:54:30 2014
% dumpe2fs -h /mnt/foo.img | grep "Reserved GDT"
dumpe2fs 1.42.11 (09-Jul-2014)
% debugfs -R "stat <7>" /mnt/foo.img
debugfs 1.42.11 (09-Jul-2014)
Inode: 7 Type: bad type Mode: 0000 Flags: 0x0
Generation: 0 Version: 0x00000000
User: 0 Group: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 0 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x00000000 -- Wed Dec 31 19:00:00 1969
atime: 0x00000000 -- Wed Dec 31 19:00:00 1969
mtime: 0x00000000 -- Wed Dec 31 19:00:00 1969
Size of extra inode fields: 0
BLOCKS:
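For comparison, a quick way to check whether an existing file system
has both features enabled and whether its resize inode was ever
populated (the device path is just an example) would be:

   dumpe2fs -h /dev/md0 | grep -E "Filesystem features|Reserved GDT blocks"
   debugfs -R "stat <7>" /dev/md0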
> I can't test this to verify my memory however as I don't seem to be able to
> create a sparse file large enough to create a filesystem in. I appear to be
> bumping up against a 2T filesize limit.
Yep, when I do this kind of testing I create a loopback-mounted
(sparse) xfs file system, so I can create the huge sparse files needed
for it.
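A rough sketch of that setup (the paths, sizes, and mount point are
illustrative, and the mount step needs root):

   truncate -s 1T /var/tmp/xfs-scratch.img    # sparse backing file
   mkfs.xfs -f -q /var/tmp/xfs-scratch.img
   mkdir -p /mnt/scratch
   mount -o loop /var/tmp/xfs-scratch.img /mnt/scratch

   # xfs has no 2T per-file limit, so a 19T sparse test image fits;
   # only the metadata mke2fs actually writes consumes real disk space.
   truncate -s 19T /mnt/scratch/foo.img
   mke2fs -Fq -t ext4 -O resize_inode,64bit /mnt/scratch/foo.img 19T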
My guess is that 1.42.5 is not doing the right thing, although I
haven't had a chance to test it yet.
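One way to check would be to build that release from the e2fsprogs tree
and repeat the commands above against a scratch image (a rough sketch,
reusing the loop-mounted xfs scratch from above):

   git clone git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
   cd e2fsprogs
   git checkout v1.42.5
   ./configure && make
   ./misc/mke2fs -Fq -t ext4 -O resize_inode,64bit /mnt/scratch/old.img 19T
   ./misc/dumpe2fs -h /mnt/scratch/old.img | grep "Reserved GDT"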
- Ted
Thread overview: 16+ messages
2014-07-20 11:26 Online resize issue with 3.13.5 & 3.15.6 Brad Campbell
2014-07-21 1:03 ` Brad Campbell
2014-07-25 4:33 ` Brad Campbell
2014-07-25 8:13 ` Azat Khuzhin
2014-07-25 11:44 ` Brad Campbell
2014-07-25 14:07 ` Theodore Ts'o
2014-07-26 3:31 ` Brad Campbell
2014-07-26 4:12 ` Brad Campbell
2014-07-26 7:04 ` Azat Khuzhin
2014-07-26 7:45 ` Azat Khuzhin
2014-07-26 12:45 ` Theodore Ts'o
2014-07-26 12:57 ` Azat Khuzhin
2014-07-26 13:46 ` Brad Campbell
2014-07-26 13:56 ` Theodore Ts'o [this message]
2014-07-29 2:46 ` Theodore Ts'o
2014-07-29 8:00 ` Brad Campbell