linux-ext4.vger.kernel.org archive mirror
From: Theodore Ts'o <tytso@mit.edu>
To: Andreas Dilger <adilger@dilger.ca>
Cc: Li Xi <pkuelelixi@gmail.com>, linux-ext4@vger.kernel.org
Subject: Re: [v4 1/6] Always read full inode structure
Date: Sun, 6 Mar 2016 14:31:34 -0500	[thread overview]
Message-ID: <20160306193134.GN10297@thunk.org> (raw)
In-Reply-To: <3DEDB39A-8511-470C-A438-726E2672B821@dilger.ca>

On Sat, Mar 05, 2016 at 11:27:25PM -0700, Andreas Dilger wrote:
> Do you think it really makes e2fsprogs less efficient?  The disk IO has
> already happened, and definitely included the whole inode even if only
> the small inode data was requested.  The ext2fs block cache will still
> cache the whole inode block, so fetching the whole inode is no overhead.

I'm concerned about all of the extra memory allocation and
deallocation that we would need to do.  If you have a million inodes,
that's a million malloc()'s and free()'s.


> In contrast, several places in the code are doing extra work to fetch
> the large inode data after having fetched the small inode data.  It is
> also fairly confusing in different parts of the code which "know" that
> the inode pointer is pointing to a full inode buffer, so it is a lot
> cleaner if we just always read the full inode data everywhere.

Can you point to some of these places?  See below, but I think it's a
lot more complicated to do what you are suggesting.

> Even better would be if the API explicitly just passed ext4_inode_large
> everywhere, which wouldn't break the ABI, but it might cause problems
> for anything that encodes the argument types (e.g. C++).  At least if
> the e2fsprogs internal functions are reading the full inode the code is
> easier to understand.

For the inode structure, for better or for worse, we have a "caller
allocates" convention.  So we can't just fill in the full inode unless
the caller explicitly requests it, and tells us how much space it has
available.

Also, if the caller passes in a pointer to struct ext2_inode, the
library can't assume it's a full inode.  Fortunately, in the vast
majority of the places where the library needs to look at the inode,
it doesn't need to look at the full inode.

Cheers,

					- Ted


Thread overview: 12+ messages
2016-03-06  4:14 [v4 0/6] Add project quota support for e2fsprogs Li Xi
2016-03-06  4:14 ` [v4 1/6] Always read full inode structure Li Xi
2016-03-06  5:46   ` Theodore Ts'o
2016-03-06  6:27     ` Andreas Dilger
2016-03-06 19:31       ` Theodore Ts'o [this message]
2016-03-06  4:14 ` [v4 2/6] Clean up codes for adding new quota type Li Xi
2016-03-06  4:14 ` [v4 3/6] Add project feature flag EXT4_FEATURE_RO_COMPAT_PROJECT Li Xi
2016-03-06  4:14 ` [v4 4/6] Add project quota support Li Xi
2016-03-06  4:14 ` [v4 5/6] Add inherit flags for project quota Li Xi
2016-03-06  4:14 ` [v4 6/6] Add project ID support for chattr/lsattr Li Xi
2016-03-06  5:56   ` Theodore Ts'o
2016-03-06 10:49     ` Li Xi
