From: Tim Chen <tim.c.chen@linux.intel.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>, Jan Kara <jack@suse.cz>,
Dave Chinner <dchinner@redhat.com>,
Dave Hansen <dave.hansen@intel.com>,
Andi Kleen <ak@linux.intel.com>,
Matthew Wilcox <willy@linux.intel.com>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] Avoid useless inodes and dentries reclamation
Date: Fri, 30 Aug 2013 09:21:34 -0700 [thread overview]
Message-ID: <1377879694.3625.77.camel@schen9-DESK> (raw)
In-Reply-To: <20130830014005.GT12779@dastard>

On Fri, 2013-08-30 at 11:40 +1000, Dave Chinner wrote:
>
> The new shrinker infrastructure has a ->count_objects() callout to
> specifically return the number of objects in the cache.
> shrink_slab_node() can check that return value against the "minimum
> call count" and determine whether it needs to call ->scan_objects()
> at all.
>
> Actually, the shrinker already behaves like this with the batch_size
> variable - the shrinker has to be asking for more items to be
> scanned than the batch size. That means the problem is that counting
> callouts are causing the problem, not the scanning callouts.
>
> With the new code in the mmotm tree, for counting purposes we
> probably don't need to grab a passive superblock reference at all -
> the superblock and LRUs are guaranteed to be valid if the shrinker
> is currently running, but we don't really care if the superblock is
> being shutdown and the values that come back are invalid because the
> ->scan_objects() callout will fail to grab the superblock to do
> anything with the calculated values.
If that's the case, then we should remove grab_super_passive
from the super_cache_count code. That should remove the bottleneck
in reclamation.

Thanks for your detailed explanation.

Tim
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
diff --git a/fs/super.c b/fs/super.c
index 73d0952..4df1fab 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -112,9 +112,6 @@ static unsigned long super_cache_count(struct shrinker *shrink,
 
 	sb = container_of(shrink, struct super_block, s_shrink);
 
-	if (!grab_super_passive(sb))
-		return 0;
-
 	if (sb->s_op && sb->s_op->nr_cached_objects)
 		total_objects = sb->s_op->nr_cached_objects(sb,
 						 sc->nid);
Thread overview: 13+ messages
2013-08-28 21:52 [PATCH] Avoid useless inodes and dentries reclamation Tim Chen
2013-08-28 21:19 ` Kirill A. Shutemov
2013-08-28 22:54 ` Tim Chen
2013-08-29 11:07 ` Dave Chinner
2013-08-29 18:07 ` Tim Chen
2013-08-29 18:36 ` Dave Hansen
2013-08-30 1:56 ` Dave Chinner
2013-08-30 1:40 ` Dave Chinner
2013-08-30 16:21 ` Tim Chen [this message]
2013-08-31 9:00 ` Dave Chinner
2013-09-03 18:38 ` Tim Chen
2013-09-06 0:55 ` Dave Chinner
2013-09-06 18:26 ` Tim Chen