From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: viro@zeniv.linux.org.uk, xfs@oss.sgi.com
Subject: Re: [PATCH 01/11] xfs: we don't need no steekin ->evict_inode
Date: Thu, 14 Apr 2016 07:20:41 +1000 [thread overview]
Message-ID: <20160413212041.GQ567@dastard> (raw)
In-Reply-To: <20160413164110.GA8475@infradead.org>
On Wed, Apr 13, 2016 at 09:41:10AM -0700, Christoph Hellwig wrote:
> Al has been very unhappy about our destroy_inode abuse, and I'm
> reluctant to make it worse.
I don't have any problems with it at all. The VFS doesn't care
how we manage inode allocation or destruction, so I don't see
any problem with what we do outside the visibility of the VFS inode
life cycle.
> Why do we need to play games with i_mode when freeing?
Because the XFS inode cache lookup code uses i_mode == 0 to detect a
freed inode. i.e. in xfs_iget_cache_miss/xfs_iget_cache_hit this is
used to allow XFS_IGET_CREATE lookups to return a freed inode that is
still in the cache during inode allocation. This is the only case
where we are allowed to find a freed inode in a cache lookup, so we
have to be able to detect it somehow.
Similarly, I'm pretty sure there are assumptions all through the XFS
code (both kernel and userspace) that i_mode == 0 means the inode is
free/unallocated. xfs_repair, for example, makes this assumption,
and so we have to zero the mode when freeing the inode...
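To make that lifecycle concrete, the interplay can be sketched in plain
user-space C. This is an illustrative model only; the struct, flag value,
and function names (fake_inode, fake_inode_free, cache_hit_ok) are made up
for the example and are not the actual XFS implementation:

```c
#include <stdbool.h>

/* Illustrative stand-in for the real XFS_IGET_CREATE lookup flag. */
#define XFS_IGET_CREATE 0x1u

/* Illustrative stand-in for the in-core inode. */
struct fake_inode {
    unsigned int i_mode;    /* 0 => inode is free/unallocated */
};

/* Freeing an inode zeroes i_mode, the convention that code like
 * xfs_repair relies on to recognise unallocated inodes. */
static void fake_inode_free(struct fake_inode *ip)
{
    ip->i_mode = 0;
}

/* Cache lookup: a freed inode (i_mode == 0) may only be returned
 * to a caller that is allocating a new inode (XFS_IGET_CREATE);
 * any other lookup that finds a freed inode must reject it. */
static bool cache_hit_ok(const struct fake_inode *ip, unsigned int flags)
{
    if (ip->i_mode == 0)
        return (flags & XFS_IGET_CREATE) != 0;
    return true;
}
```

i.e. once the mode is zeroed on free, every lookup path can make the
same "is this inode allocated?" decision from i_mode alone.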
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 25+ messages
2016-04-13 5:31 [PATCH 00/11 v3] xfs: inode reclaim vs the world Dave Chinner
2016-04-13 5:31 ` [PATCH 01/11] xfs: we don't need no steekin ->evict_inode Dave Chinner
2016-04-13 16:41 ` Christoph Hellwig
2016-04-13 21:20 ` Dave Chinner [this message]
2016-04-14 12:10 ` Brian Foster
2016-04-13 5:31 ` [PATCH 02/11] xfs: xfs_iflush_cluster fails to abort on error Dave Chinner
2016-04-13 16:41 ` Christoph Hellwig
2016-04-13 5:31 ` [PATCH 03/11] xfs: fix inode validity check in xfs_iflush_cluster Dave Chinner
2016-04-13 5:31 ` [PATCH 04/11] xfs: skip stale inodes " Dave Chinner
2016-04-13 5:31 ` [PATCH 05/11] xfs: optimise xfs_iext_destroy Dave Chinner
2016-04-13 16:45 ` Christoph Hellwig
2016-04-13 5:31 ` [PATCH 06/11] xfs: xfs_inode_free() isn't RCU safe Dave Chinner
2016-04-13 5:31 ` [PATCH 07/11] xfs: mark reclaimed inodes invalid earlier Dave Chinner
2016-04-13 6:49 ` Dave Chinner
2016-04-14 12:10 ` Brian Foster
2016-04-14 23:31 ` Dave Chinner
2016-04-15 12:46 ` Brian Foster
2016-04-13 5:31 ` [PATCH 08/11] xfs: xfs_iflush_cluster has range issues Dave Chinner
2016-04-13 5:31 ` [PATCH 09/11] xfs: rename variables in xfs_iflush_cluster for clarity Dave Chinner
2016-04-13 5:31 ` [PATCH 10/11] xfs: simplify inode reclaim tagging interfaces Dave Chinner
2016-04-14 12:10 ` Brian Foster
2016-06-29 4:21 ` Darrick J. Wong
2016-04-13 5:31 ` [PATCH 11/11] xfs: move reclaim tagging functions Dave Chinner
2016-04-14 12:11 ` Brian Foster
2016-04-13 15:38 ` [PATCH 00/11 v3] xfs: inode reclaim vs the world Darrick J. Wong