* resize2fs running out of reserved gdt blocks.
@ 2012-11-12 11:21 Kimmo Mustonen
2012-11-12 14:40 ` Theodore Ts'o
0 siblings, 1 reply; 3+ messages in thread
From: Kimmo Mustonen @ 2012-11-12 11:21 UTC (permalink / raw)
To: linux-ext4; +Cc: k-20121112-81452+linux-ext4
I have an ext4 partition that seems to have used all reserved gdt blocks.
Symptoms:
--8<--8<--
$ sudo time resize2fs -p /dev/sdb
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/sdb is mounted on /m/nfs/dvb3; on-line resizing required
old_desc_blocks = 1102, new_desc_blocks = 1744
Performing an on-line resize of /dev/sdb to 3656906240 (4k) blocks.
resize2fs: Toiminto ei ole sallittu (Operation not permitted) While trying to add group #87872
Command exited with non-zero status 1
1.06user 23.92system 1:14:43elapsed 0%CPU (0avgtext+0avgdata
2967168maxresident)k
2762520inputs+0outputs (2major+185506minor)pagefaults 0swaps
--8<--8<--
dmesg output
--8<--8<--
[ 5984.360959] EXT4-fs warning (device sdb): ext4_group_add:790: No
reserved GDT blocks, can't resize
--8<--8<--
1) Why did they run out?
2) Is there a way to add more of them?
3) If not, how to recover and/or make sure it doesn't happen again?
History and steps done:
System: Debian 6, Squeeze (stable), 64-bit.
--8<--8<--
The filesystem was originally made on a 2x3TB mirror (usable size 3 TB)
using e2fsprogs-1.42.4 and kernel 2.6.38-2-amd64:
sudo ~/src/e2fsprogs-1.42.4/build/misc/mke2fs \
-O 64bit,has_journal,extents,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize \
-i 4194304 /dev/sdb
I most probably ran
tune2fs -L /m/nfs/dvb3 -m 0 -c 0 -i 0 /dev/sdb
but have no record of this.
Then it was converted to a 3x3TB RAID5 (usable size 6 TB):
sudo /usr/local/sbin/resize2fs-1.42.5 /dev/sdb
At this point the kernel was upgraded to 3.2.0-0.bpo.3-amd64 trying to
resolve one (unrelated) issue but it didn't seem to improve anything.
Then it was converted to a 4x3TB RAID5 (usable size 9 TB):
sudo /usr/local/sbin/resize2fs-1.42.5 /dev/sdb
And now I added two more disks and tried to convert it to a 6x3TB RAID5
(usable size 15 TB), but it failed at 12 TB:
sudo time resize2fs -p /dev/sdb
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/sdb is mounted on /m/nfs/dvb3; on-line resizing required
old_desc_blocks = 1102, new_desc_blocks = 1744
Performing an on-line resize of /dev/sdb to 3656906240 (4k) blocks.
resize2fs: Toiminto ei ole sallittu (Operation not permitted) While trying to add group #87872
Command exited with non-zero status 1
1.06user 23.92system 1:14:43elapsed 0%CPU (0avgtext+0avgdata
2967168maxresident)k
2762520inputs+0outputs (2major+185506minor)pagefaults 0swaps
and now when trying to (continue) resizing it, it fails immediately:
sudo time resize2fs-1.42.6 -p /dev/sdb
resize2fs 1.42.6 (21-Sep-2012)
Filesystem at /dev/sdb is mounted on /m/nfs/dvb3; on-line resizing
required
old_desc_blocks = 1373, new_desc_blocks = 1744
resize2fs-1.42.6: Not enough reserved gdt blocks for resizing
Command exited with non-zero status 1
0.02user 0.01system 0:00.07elapsed 41%CPU (0avgtext+0avgdata
25456maxresident)k
400inputs+0outputs (2major+1640minor)pagefaults 0swaps
--8<--8<--
tune2fs -l doesn't show any gdt entries, probably because there are 0 of
them left:
--8<--8<--
$ sudo tune2fs-1.42.6 -l /dev/sdb
tune2fs 1.42.6 (21-Sep-2012)
Filesystem volume name: /m/nfs/dvb3
Last mounted on: /m/nfs/dvb3
Filesystem UUID: 901df891-a9b7-42d6-828e-f8e4d08dd665
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode
dir_index filetype needs_recovery extent 64bit flex_bg
sparse_super large_file huge_file uninit_bg dir_nlink
extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 2811904
Block count: 2879389696
Reserved block count: 0
Free blocks: 701905763
Free inodes: 2796804
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32
Inode blocks per group: 2
Flex block group size: 16
Filesystem created: Fri Jun 22 03:08:32 2012
Last mount time: Mon Nov 12 11:44:10 2012
Last write time: Mon Nov 12 11:44:10 2012
Mount count: 1
Maximum mount count: -1
Last checked: Mon Nov 12 09:40:19 2012
Check interval: 0 (<none>)
Lifetime writes: 77 TB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: a3addc24-77ec-4154-8dd2-7b70fa0d942b
Journal backup: inode blocks
--8<--8<--
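[Editor's aside, not part of the original mail: the inode count in the
tune2fs -l output above can be reproduced from the mke2fs -i 4194304
option used at creation; a minimal Python sketch of the arithmetic:]

```python
# Cross-check of the tune2fs -l output above: mke2fs -i 4194304 asks for
# one inode per 4 MiB of space, i.e. 32 inodes per 128 MiB block group,
# which reproduces the reported inode count exactly.
BLOCK_SIZE = 4096
BLOCKS_PER_GROUP = 32768
BLOCK_COUNT = 2879389696                  # "Block count" from tune2fs -l
BYTES_PER_INODE = 4194304                 # the -i value used at mkfs time

groups = BLOCK_COUNT // BLOCKS_PER_GROUP  # 87872 block groups
inodes_per_group = (BLOCKS_PER_GROUP * BLOCK_SIZE) // BYTES_PER_INODE  # 32
print(groups * inodes_per_group)          # 2811904, matching "Inode count"
```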
and fsck shows it still looks okay:
--8<--8<--
$ sudo /usr/local/sbin/e2fsck-1.42.6 -v -f /dev/sdb
e2fsck 1.42.6 (21-Sep-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
15100 inodes used (0.54%, out of 2811904)
6440 non-contiguous files (42.6%)
24 non-contiguous directories (0.2%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 4762/10283/44
2177483933 blocks used (75.62%, out of 2879389696)
0 bad blocks
1047 large files
14864 regular files
224 directories
0 character device files
0 block device files
0 fifos
0 links
3 symbolic links (3 fast symbolic links)
0 sockets
------------
15091 files
--8<--8<--
Once this has been resolved, my next plan is to grow the filesystem past
the 16 TB barrier. Which kernel version should I use to have a chance of
succeeding? Is there a recommended distribution, if Debian stable plus an
updated kernel/e2fsprogs is a bad choice?
Another thing I have observed: I have a fairly constant write load of
about 50-90 MB/s to that filesystem, and whenever I try to resize it
online without moving that writer to another filesystem, the resize hangs
after an hour or so. The filesystem then seems unable to perform any
write operations, and the resize stops as well. Reading does not seem to
be affected; files and directories can still be accessed fine. There is
not much Dirty or Writeback visible under /proc/meminfo, but still
nothing gets written. I have mounted it with the noatime option so that
reads do not need to update access times. The writing processes cannot be
killed, not even with kill -KILL, and the system does not shut down; a
reboot using magic SysRq is needed to recover. However, now that I know
about this resize hang during writes, I can live with it.
Regards,
Kimmo Mustonen
* Re: resize2fs running out of reserved gdt blocks.
From: Theodore Ts'o @ 2012-11-12 14:40 UTC (permalink / raw)
To: Kimmo Mustonen; +Cc: linux-ext4
On Mon, Nov 12, 2012 at 01:21:32PM +0200, Kimmo Mustonen wrote:
> I have an ext4 partition that seems to have used all reserved gdt blocks.
The original on-line resize scheme required gdt blocks to be reserved
so there would be room for the group descriptor table blocks (which
had to be contiguous). The number of reserved group descriptor blocks
was calculated to allow growing to a thousand times the size of the
original file system, which would normally be enough. *But* it was also
capped by a limitation in how we reserved the gdt blocks such that the
sum of
the in-use gdt blocks plus reserved gdt blocks could not exceed 1024
blocks. Before the 64-bit option, this was not a problem, since we
would hit the 16T limitation at the same time. However, when the
64-bit option is enabled, it's possible to run out of gdt blocks using
the old scheme; this is the limitation which you hit.
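[Editor's aside, not part of the original mail: the numbers reported by
the failed resize are consistent with this explanation; a
back-of-the-envelope check assuming 4 KiB blocks, 32768 blocks per group
and 64-byte descriptors (the 64bit feature), all as shown by tune2fs -l
earlier in the thread:]

```python
# All figures below come from the thread itself; the arithmetic assumes
# 4 KiB blocks, 32768 blocks per group, and 64-byte group descriptors
# (the 64bit feature), as reported by tune2fs -l.
BLOCK_SIZE = 4096
BLOCKS_PER_GROUP = 32768
DESC_SIZE = 64                             # bytes per descriptor with 64bit
DESCS_PER_BLOCK = BLOCK_SIZE // DESC_SIZE  # 64 group descriptors per block

old_desc_blocks = 1373                     # reported by resize2fs-1.42.6
first_failing_group = old_desc_blocks * DESCS_PER_BLOCK
print(first_failing_group)                 # 87872, the group that failed

# Capacity covered by that many descriptor blocks:
tb = first_failing_group * BLOCKS_PER_GROUP * BLOCK_SIZE / 1e12
print(round(tb, 1))                        # ~11.8 TB, the ~12 TB stall point
```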
In the very latest kernels, we have added a new on-line resizing
scheme, which does not require that block group descriptors be
contiguous. However, it requires the very latest version of e2fsprogs
(1.42.6) and the to-be-released 3.7 kernel (the new on-line resizing
code landed in 3.7-rc1, during the merge window).
If you are willing to temporarily boot a bleeding edge 3.7 kernel and
use the resize2fs from e2fsprogs 1.42.6, you'll be able to resize
your partition. After you do this, you could fall back to the Debian
stable kernel and things should work w/o problems; the latest kernel
and e2fsprogs code is needed just for the online resize operation.
The resulting file system will be recognized by older kernels and
e2fsprogs programs.
Regards,
- Ted
* Re: resize2fs running out of reserved gdt blocks.
From: Kimmo Mustonen @ 2012-11-14 1:07 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: Kimmo Mustonen, linux-ext4
On Mon, 12 Nov 2012, Theodore Ts'o wrote:
> If you are willing to temporarily boot a bleeding edge 3.7 kernel and
> use the resize2fs from e2fsprogs 1.42.6, you'll be able to resize
> your partition. After you do this, you could fall back to the Debian
After a bit of fighting to generate a proper .config file, I managed to
boot into 3.7.0-rc5 and resize the partition. I just noticed that I
accidentally used e2fsprogs 1.42.5 for doing that. Should I be worried
about that? I'm now back running the 3.2.0-0.bpo.3-amd64 kernel and am
running e2fsck-1.42.6 to check it. BTW, the resize from 12T to 15T was
amazingly fast (about 30 seconds) compared to the "old" behaviour, where
it took hours to complete. Thanks!
> stable kernel and things should work w/o problems; the latest kernel
> and e2fsprogs code is needed just for the online resize operation.
If I continue to grow the array using 3.7.0-rc5 and resize2fs-1.42.6,
when do you expect me to encounter problems or hit the next barrier? Is
going past 16TB safe? How about 32TB? What is the maximum size I can reach
with this setup? The filesystem was made with the 64bit flag.
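[Editor's aside, not part of the original mail: the 16 TB figure asked
about here is the classic ext4 ceiling without the 64bit feature, which
is simply 32-bit block numbers times the 4 KiB block size; a one-line
check:]

```python
# Without the 64bit feature, ext4 block numbers are 32-bit, so with 4 KiB
# blocks the volume size ceiling is exactly 2**32 blocks. The 64bit
# feature (which this filesystem has) widens block numbers, so that
# particular ceiling no longer applies.
BLOCK_SIZE = 4096
print(2**32 * BLOCK_SIZE // 2**40)  # 16 -> the 16 TiB limit without 64bit
```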
Regards,
Kimmo