linux-fsdevel.vger.kernel.org archive mirror
From: Nick Piggin <npiggin@suse.de>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, xfs@oss.sgi.com
Subject: Re: [PATCH 1/5] inode: Make unused inode LRU per superblock
Date: Thu, 27 May 2010 14:23:32 +1000	[thread overview]
Message-ID: <20100527042332.GH22536@laptop> (raw)
In-Reply-To: <20100527040210.GI12087@dastard>

On Thu, May 27, 2010 at 02:02:10PM +1000, Dave Chinner wrote:
> On Thu, May 27, 2010 at 12:04:45PM +1000, Nick Piggin wrote:
> > On Thu, May 27, 2010 at 09:01:29AM +1000, Dave Chinner wrote:
> > > On Thu, May 27, 2010 at 02:17:33AM +1000, Nick Piggin wrote:
> > > > On Tue, May 25, 2010 at 06:53:04PM +1000, Dave Chinner wrote:
> > > > > From: Dave Chinner <dchinner@redhat.com>
> > > > > 
> > > > > The inode unused list is currently a global LRU. This does not match
> > > > > the other global filesystem cache - the dentry cache - which uses
> > > > > per-superblock LRU lists. Hence we have related filesystem object
> > > > > types using different LRU reclamation schemes.
> > > > 
> > > > Is this an improvement I wonder? The dcache is using per sb lists
> > > > because it specifically requires sb traversal.
> > > 
> > > Right - I originally implemented the per-sb dentry lists for
> > > scalability purposes. i.e. to avoid monopolising the dentry_lock
> > > during unmount looking for dentries on a specific sb and hanging the
> > > system for several minutes.
> > > 
> > > However, the reason for doing this to the inode cache is not for
> > > scalability, it's because we have a tight relationship between the
> > > dentry and inode caches. That is, reclaim from the dentry LRU grows
> > > the inode LRU.  Like the registration of the shrinkers, this is kind
> > > of an implicit, undocumented behaviour of the current shrinker
> > > implementation.
> > 
> > Right, that's why I wonder whether it is an improvement. It would
> > be interesting to see some tests (showing at least parity).
> 
> I've done some testing showing parity. They've been along the lines
> of:
> 	- populate cache with 1m dentries + inodes
> 	- run 'time echo 2 > /proc/sys/vm/drop_caches'
> 
> I've used different methods of populating the caches to have them
> non-sequential in the LRU (i.e. trigger fragmentation), have dirty
> backing inodes (e.g. the VFS inode clean, the xfs inode dirty
> because transactions haven't completed), etc.
> 
> The variation on the test is around +-10%, with the per-sb shrinkers
> averaging about 5% lower time to reclaim. This is within the error
> margin of the test, so it's not really a conclusive win, but it
> certainly shows that it does not slow anything down. If you've got a
> better way to test it, then I'm all ears....

I guess the problem is that the inode LRU cache isn't very useful as
long as there are dentries in the way pinning the inodes (which is
most of the time, isn't it?). I think nfsd will exercise them
better? I don't know of any other cases.


> > Right, it just makes it harder to do. By much harder, I did mostly mean
> > the extra memory overhead.
> 
> You've still got to allocate that extra memory on the per-sb dentry
> LRUs so it's not really a valid argument.

Well, it would be a per-zone, per-sb list, but I don't think that
makes it an invalid point.


> IOWs, if it's too much
> memory for per-sb inode LRUs, then it's too much memory for the
> per-sb dentry LRUs as well...

It's not about how much is too much; it's about what benefit we get
for the extra cost and memory usage. I guess it isn't a lot more
memory, though.

 
> > If there is *no* benefit from doing per-sb
> > icache then I would question whether we should.
> 
> The same vague questions wondering about the benefit of per-sb
> dentry LRUs were raised when I first proposed them years ago, and
> look where we are now.

To be fair, that is because there were specific needs for per-sb
pruning. That isn't the case with the icache.


>  Besides, focussing on whether this one patch
> is a benefit or not is really missing the point because it's the
> benefits of this patchset as a whole that need to be considered....

I would indeed like to focus on the benefits of the patchset as a
whole. Leaving aside the xfs changes, it would be interesting to
have at least a few numbers for dcache/icache heavy workloads.


Thread overview: 39+ messages
2010-05-25  8:53 [PATCH 0/5] Per superblock shrinkers V2 Dave Chinner
2010-05-25  8:53 ` [PATCH 1/5] inode: Make unused inode LRU per superblock Dave Chinner
2010-05-26 16:17   ` Nick Piggin
2010-05-26 23:01     ` Dave Chinner
2010-05-27  2:04       ` Nick Piggin
2010-05-27  4:02         ` Dave Chinner
2010-05-27  4:23           ` Nick Piggin [this message]
2010-05-27 20:32   ` Andrew Morton
2010-05-27 22:54     ` Dave Chinner
2010-05-28 10:07       ` Nick Piggin
2010-05-25  8:53 ` [PATCH 2/5] mm: add context argument to shrinker callback Dave Chinner
2010-05-25  8:53 ` [PATCH 3/5] superblock: introduce per-sb cache shrinker infrastructure Dave Chinner
2010-05-26 16:41   ` Nick Piggin
2010-05-26 23:12     ` Dave Chinner
2010-05-27  1:53       ` [PATCH 3/5 v2] " Dave Chinner
2010-05-27  4:01         ` Al Viro
2010-05-27  6:17           ` Dave Chinner
2010-05-27  6:46             ` Nick Piggin
2010-05-27  2:19       ` [PATCH 3/5] " Nick Piggin
2010-05-27  4:07         ` Dave Chinner
2010-05-27  4:24           ` Nick Piggin
2010-05-27  6:35   ` Nick Piggin
2010-05-27 22:40     ` Dave Chinner
2010-05-28  5:19       ` Nick Piggin
2010-05-31  6:39         ` Dave Chinner
2010-05-31  7:28           ` Nick Piggin
2010-05-27 20:32   ` Andrew Morton
2010-05-27 23:01     ` Dave Chinner
2010-05-25  8:53 ` [PATCH 4/5] superblock: add filesystem shrinker operations Dave Chinner
2010-05-27 20:32   ` Andrew Morton
2010-05-25  8:53 ` [PATCH 5/5] xfs: make use of new shrinker callout Dave Chinner
2010-05-26 16:44 ` [PATCH 0/5] Per superblock shrinkers V2 Nick Piggin
2010-05-27 20:32 ` Andrew Morton
2010-05-28  0:30   ` Dave Chinner
2010-05-28  7:42   ` Artem Bityutskiy
2010-07-02 12:13 ` Christoph Hellwig
2010-07-12  2:41   ` Dave Chinner
2010-07-12  2:52     ` Christoph Hellwig
2010-05-14  7:24 [PATCH 0/5] Per-superblock shrinkers Dave Chinner
2010-05-14  7:24 ` [PATCH 1/5] inode: Make unused inode LRU per superblock Dave Chinner
