* How full should the inode table be?
From: George Spelvin @ 2012-11-11  9:55 UTC
  To: linux-ext4; +Cc: linux

I have an ext4 file system which was formatted with the default number
of bytes per inode, leading to a lot of wasted inodes:

Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/md0       9728762072 6902736072 2337647668  75% /data
Filesystem        Inodes   IUsed     IFree IUse% Mounted on
/dev/md0       152619008 2012348 150606660    2% /data

Now, it turns out that I have to rebuild it with 64-bit block numbers
in order to grow it past 16 TB (wow, was *that* a nasty surprise),
and I intend to use a somewhat saner bytes/inode ratio.

(Ignoring the slight space gain, fewer inodes means faster e2fsck.)

Now, the current data, which is a decent model for future data, is
running 3512514 bytes/inode.
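
For the record, that figure is just used space divided by used inodes,
taken straight from the df output above:

    # 1K-blocks used * 1024 bytes, divided by inodes used
    echo $(( 6902736072 * 1024 / 2012348 ))
    # -> 3512514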

I could just use that, so the FS will run out of data blocks at about
the same time as it runs out of inodes, but I wonder: does the FS benefit
from more slack in inode allocation?

Given that accessing all the inodes in a directory is much more common
than scanning all the data in a directory, perhaps reducing fragmentation
in the inode table has a significant performance benefit.

I.e. perhaps an 80% full inode table causes more problems than an 80%
full disk, and I should try to leave more free space.

Allocating 2x the inodes I think I'll need doesn't cost very much,
after all: 256 additional bytes of inode per 3512514 bytes of data is
only about 0.007% overhead.
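
To double-check that arithmetic:

    # percent overhead of one extra 256-byte inode per 3512514 bytes of data
    echo "scale=6; 100 * 256 / 3512514" | bc
    # -> .007288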


STFWing a bit, I see lots of people applying fudge factors of 1.2 to 4
to the measured bytes/inode to get the -i argument, but I don't see any
real justification for those numbers.
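
To make that concrete, the sort of invocation I have in mind looks like
this; the fudge factor of 2 and the resulting -i value are just
placeholders, not recommendations from anywhere:

    # hypothetical: measured 3512514 bytes/inode divided by a fudge
    # factor of 2, plus -O 64bit so it can grow past 16 TB
    mke2fs -t ext4 -O 64bit -i 1756257 /dev/md0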
Any advice?

Many thanks!


* Re: How full should the inode table be?
From: Theodore Ts'o @ 2012-11-12 16:30 UTC
  To: George Spelvin; +Cc: linux-ext4

On Sun, Nov 11, 2012 at 04:55:12AM -0500, George Spelvin wrote:
> Now, it turns out that I have to rebuild it with 64-bit block numbers
> in order to grow it past 16 TB (wow, was *that* a nasty surprise),
> and I intend to use a somewhat saner bytes/inode ratio.
> 
> (Ignoring the slight space gain, fewer inodes means faster e2fsck.)

Actually, with ext4, we keep track of the last used inode in each
block group, so there isn't a speed gain for using a smaller number of
inodes.  It did make a difference for ext3, but not for ext4.
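
This is the per-block-group "itable unused" count that comes with the
uninit_bg feature; assuming the same device name as in your df output,
you can confirm it's enabled with dumpe2fs:

    # should list uninit_bg among the filesystem features
    dumpe2fs -h /dev/md0 | grep -i features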

> I could just use that, so the FS will run out of data blocks at about
> the same time as it runs out of inodes, but I wonder: does the FS benefit
> from more slack in inode allocation?

The file system doesn't actually gain anything one way or another in
terms of slack space in the inode table.  The major downside is that
if you guess wrong, and you have many more small files than you had
estimated, there's no way to change the inode ratio afterwards, short
of backing up and reformatting.  So that's why historically we've
tended to massively overprovision the number of inodes available to
the file system.
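
If you want to double-check the ratio an existing file system was
created with, the superblock has everything needed (again assuming
your device name):

    # bytes per inode at mkfs time = Block count * Block size / Inode count
    dumpe2fs -h /dev/md0 | grep -E 'Inode count|Block count|Block size'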

Regards,

						- Ted

