From: Glauber Costa <glommer-GEFAQzZX7r8dnm+yROfE0A@public.gmane.org>
To: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
Cc: <linux-fsdevel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
Mel Gorman <mgorman-l3A5Bk7waGM@public.gmane.org>,
Dave Chinner <david-FqsqvQoI3Ljby3iVrkZq2A@public.gmane.org>,
<linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org>,
<cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
<kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>,
Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org>,
Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
hughd-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
Greg Thelen <gthelen-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
Dave Chinner <dchinner-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Subject: [PATCH v9 05/35] dcache: remove dentries from LRU before putting on dispose list
Date: Thu, 30 May 2013 14:35:51 +0400
Message-ID: <1369910181-20026-6-git-send-email-glommer@openvz.org>
In-Reply-To: <1369910181-20026-1-git-send-email-glommer-GEFAQzZX7r8dnm+yROfE0A@public.gmane.org>
From: Dave Chinner <dchinner-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
One of the big problems with modifying the way the dcache shrinker
and LRU implementation work is that the LRU is abused in several
ways. One of these abuses is in shrink_dentry_list().

Basically, we can move a dentry off the LRU onto a different list
without making any accounting changes, and then use dentry_lru_prune()
to remove it from whatever list it is now on and do the LRU
accounting at that point.
This makes it -really hard- to change the LRU implementation. The
per-sb LRU lock serialises both the movement of dentries between the
different lists and their removal, and this is the only reason the
scheme works. If we want to break the dentry LRU lock and lists up
into, say, per-node lists, we remove the only serialisation that
allows this LRU list/dispose list abuse to work.
To make this work effectively, the dispose list has to be isolated
from the LRU list - dentries have to be removed from the LRU
*before* being placed on the dispose list (a simplified sketch of
this pattern follows). This means that the LRU accounting and
isolation are completed before disposal is started, and that in turn
means we can change the LRU implementation freely in the future.
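
To illustrate the intended ordering, here is a minimal sketch of the
isolation pattern. isolate_and_dispose() is a made-up helper name for
illustration only, and the trylock/referenced-dentry handling of the
real prune_dcache_sb() is omitted:

static void isolate_and_dispose(struct super_block *sb, int count)
{
	LIST_HEAD(dispose);
	struct dentry *dentry;

	spin_lock(&sb->s_dentry_lru_lock);
	while (count-- && !list_empty(&sb->s_dentry_lru)) {
		dentry = list_first_entry(&sb->s_dentry_lru,
					  struct dentry, d_lru);
		spin_lock(&dentry->d_lock);
		/* off the LRU: fix the accounting now, under the LRU lock */
		list_move(&dentry->d_lru, &dispose);
		dentry->d_flags |= DCACHE_SHRINK_LIST;
		sb->s_nr_dentry_unused--;
		this_cpu_dec(nr_dentry_unused);
		spin_unlock(&dentry->d_lock);
	}
	spin_unlock(&sb->s_dentry_lru_lock);

	/* the dispose list is private now; no LRU lock needed from here on */
	shrink_dentry_list(&dispose);
}

Because the accounting is corrected at isolation time,
shrink_dentry_list() can drop entries from the dispose list without
ever touching the LRU lock again.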
This means that dentries *must* be marked with DCACHE_SHRINK_LIST
when they are placed on the dispose list, so that we don't think that
parent dentries found in try_prune_one_dentry() are on the LRU when
they are actually on the dispose list. Otherwise we would account the
dentry to the LRU a second time. Hence dentry_lru_del() has to handle
the DCACHE_SHRINK_LIST case differently, because the dentry isn't on
the LRU list.
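
For reference, the resulting dentry_lru_del() logic (matching the hunk
below) distinguishes the two cases like this:

static void dentry_lru_del(struct dentry *dentry)
{
	if (dentry->d_flags & DCACHE_SHRINK_LIST) {
		/* on a private dispose list, not the LRU: no accounting to undo */
		list_del_init(&dentry->d_lru);
		dentry->d_flags &= ~DCACHE_SHRINK_LIST;
		return;
	}

	if (!list_empty(&dentry->d_lru)) {
		spin_lock(&dentry->d_sb->s_dentry_lru_lock);
		__dentry_lru_del(dentry);	/* adjusts s_nr_dentry_unused */
		spin_unlock(&dentry->d_sb->s_dentry_lru_lock);
	}
}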
[ v2: don't decrement nr unused twice, spotted by Sha Zhengju ]
[ v7: (dchinner)
- shrink list leaks dentries when inode/parent can't be locked in
dentry_kill().
- remove the readdition of dentry_lru_prune(). ]
Signed-off-by: Dave Chinner <dchinner-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
fs/dcache.c | 98 ++++++++++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 77 insertions(+), 21 deletions(-)
diff --git a/fs/dcache.c b/fs/dcache.c
index 9d8ec4a..03d0c21 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -315,7 +315,7 @@ static void dentry_unlink_inode(struct dentry * dentry)
}
/*
- * dentry_lru_(add|del|prune|move_tail) must be called with d_lock held.
+ * dentry_lru_(add|del|move_list) must be called with d_lock held.
*/
static void dentry_lru_add(struct dentry *dentry)
{
@@ -331,16 +331,25 @@ static void dentry_lru_add(struct dentry *dentry)
static void __dentry_lru_del(struct dentry *dentry)
{
list_del_init(&dentry->d_lru);
- dentry->d_flags &= ~DCACHE_SHRINK_LIST;
dentry->d_sb->s_nr_dentry_unused--;
this_cpu_dec(nr_dentry_unused);
}
/*
* Remove a dentry with references from the LRU.
+ *
+ * If we are on the shrink list, then we can get to try_prune_one_dentry() and
+ * lose our last reference through the parent walk. In this case, we need to
+ * remove ourselves from the shrink list, not the LRU.
*/
static void dentry_lru_del(struct dentry *dentry)
{
+ if (dentry->d_flags & DCACHE_SHRINK_LIST) {
+ list_del_init(&dentry->d_lru);
+ dentry->d_flags &= ~DCACHE_SHRINK_LIST;
+ return;
+ }
+
if (!list_empty(&dentry->d_lru)) {
spin_lock(&dentry->d_sb->s_dentry_lru_lock);
__dentry_lru_del(dentry);
@@ -350,13 +359,15 @@ static void dentry_lru_del(struct dentry *dentry)
static void dentry_lru_move_list(struct dentry *dentry, struct list_head *list)
{
+ BUG_ON(dentry->d_flags & DCACHE_SHRINK_LIST);
+
spin_lock(&dentry->d_sb->s_dentry_lru_lock);
if (list_empty(&dentry->d_lru)) {
list_add_tail(&dentry->d_lru, list);
- dentry->d_sb->s_nr_dentry_unused++;
- this_cpu_inc(nr_dentry_unused);
} else {
list_move_tail(&dentry->d_lru, list);
+ dentry->d_sb->s_nr_dentry_unused--;
+ this_cpu_dec(nr_dentry_unused);
}
spin_unlock(&dentry->d_sb->s_dentry_lru_lock);
}
@@ -454,7 +465,8 @@ EXPORT_SYMBOL(d_drop);
* If ref is non-zero, then decrement the refcount too.
* Returns dentry requiring refcount drop, or NULL if we're done.
*/
-static inline struct dentry *dentry_kill(struct dentry *dentry, int ref)
+static inline struct dentry *
+dentry_kill(struct dentry *dentry, int ref, int unlock_on_failure)
__releases(dentry->d_lock)
{
struct inode *inode;
@@ -463,8 +475,10 @@ static inline struct dentry *dentry_kill(struct dentry *dentry, int ref)
inode = dentry->d_inode;
if (inode && !spin_trylock(&inode->i_lock)) {
relock:
- spin_unlock(&dentry->d_lock);
- cpu_relax();
+ if (unlock_on_failure) {
+ spin_unlock(&dentry->d_lock);
+ cpu_relax();
+ }
return dentry; /* try again with same dentry */
}
if (IS_ROOT(dentry))
@@ -551,7 +565,7 @@ repeat:
return;
kill_it:
- dentry = dentry_kill(dentry, 1);
+ dentry = dentry_kill(dentry, 1, 1);
if (dentry)
goto repeat;
}
@@ -750,12 +764,12 @@ EXPORT_SYMBOL(d_prune_aliases);
*
* This may fail if locks cannot be acquired no problem, just try again.
*/
-static void try_prune_one_dentry(struct dentry *dentry)
+static struct dentry * try_prune_one_dentry(struct dentry *dentry)
__releases(dentry->d_lock)
{
struct dentry *parent;
- parent = dentry_kill(dentry, 0);
+ parent = dentry_kill(dentry, 0, 0);
/*
* If dentry_kill returns NULL, we have nothing more to do.
* if it returns the same dentry, trylocks failed. In either
@@ -767,9 +781,9 @@ static void try_prune_one_dentry(struct dentry *dentry)
* fragmentation.
*/
if (!parent)
- return;
+ return NULL;
if (parent == dentry)
- return;
+ return dentry;
/* Prune ancestors. */
dentry = parent;
@@ -778,10 +792,11 @@ static void try_prune_one_dentry(struct dentry *dentry)
if (dentry->d_count > 1) {
dentry->d_count--;
spin_unlock(&dentry->d_lock);
- return;
+ return NULL;
}
- dentry = dentry_kill(dentry, 1);
+ dentry = dentry_kill(dentry, 1, 1);
}
+ return NULL;
}
static void shrink_dentry_list(struct list_head *list)
@@ -800,21 +815,31 @@ static void shrink_dentry_list(struct list_head *list)
}
/*
+ * The dispose list is isolated and dentries are not accounted
+ * to the LRU here, so we can simply remove it from the list
+ * here regardless of whether it is referenced or not.
+ */
+ list_del_init(&dentry->d_lru);
+ dentry->d_flags &= ~DCACHE_SHRINK_LIST;
+
+ /*
* We found an inuse dentry which was not removed from
- * the LRU because of laziness during lookup. Do not free
- * it - just keep it off the LRU list.
+ * the LRU because of laziness during lookup. Do not free it.
*/
if (dentry->d_count) {
- dentry_lru_del(dentry);
spin_unlock(&dentry->d_lock);
continue;
}
-
rcu_read_unlock();
- try_prune_one_dentry(dentry);
+ dentry = try_prune_one_dentry(dentry);
rcu_read_lock();
+ if (dentry) {
+ dentry->d_flags |= DCACHE_SHRINK_LIST;
+ list_add(&dentry->d_lru, list);
+ spin_unlock(&dentry->d_lock);
+ }
}
rcu_read_unlock();
}
@@ -855,8 +880,10 @@ relock:
list_move(&dentry->d_lru, &referenced);
spin_unlock(&dentry->d_lock);
} else {
- list_move_tail(&dentry->d_lru, &tmp);
+ list_move(&dentry->d_lru, &tmp);
dentry->d_flags |= DCACHE_SHRINK_LIST;
+ this_cpu_dec(nr_dentry_unused);
+ sb->s_nr_dentry_unused--;
spin_unlock(&dentry->d_lock);
if (!--count)
break;
@@ -870,6 +897,27 @@ relock:
shrink_dentry_list(&tmp);
}
+/*
+ * Mark all the dentries as being on the dispose list so we don't think they are
+ * still on the LRU if we try to kill them from ascending the parent chain in
+ * try_prune_one_dentry() rather than directly from the dispose list.
+ */
+static void
+shrink_dcache_list(
+ struct list_head *dispose)
+{
+ struct dentry *dentry;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(dentry, dispose, d_lru) {
+ spin_lock(&dentry->d_lock);
+ dentry->d_flags |= DCACHE_SHRINK_LIST;
+ spin_unlock(&dentry->d_lock);
+ }
+ rcu_read_unlock();
+ shrink_dentry_list(dispose);
+}
+
/**
* shrink_dcache_sb - shrink dcache for a superblock
* @sb: superblock
@@ -883,9 +931,17 @@ void shrink_dcache_sb(struct super_block *sb)
spin_lock(&sb->s_dentry_lru_lock);
while (!list_empty(&sb->s_dentry_lru)) {
+ /*
+ * account for removal here so we don't need to handle it later
+ * even though the dentry is no longer on the lru list.
+ */
list_splice_init(&sb->s_dentry_lru, &tmp);
+ this_cpu_sub(nr_dentry_unused, sb->s_nr_dentry_unused);
+ sb->s_nr_dentry_unused = 0;
spin_unlock(&sb->s_dentry_lru_lock);
- shrink_dentry_list(&tmp);
+
+ shrink_dcache_list(&tmp);
+
spin_lock(&sb->s_dentry_lru_lock);
}
spin_unlock(&sb->s_dentry_lru_lock);
--
1.8.1.4