From: Glauber Costa <glommer@openvz.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-fsdevel@vger.kernel.org>, Mel Gorman <mgorman@suse.de>,
	Dave Chinner <david@fromorbit.com>, <linux-mm@kvack.org>,
	<cgroups@vger.kernel.org>, <kamezawa.hiroyu@jp.fujitsu.com>,
	Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	hughd@google.com, Greg Thelen <gthelen@google.com>,
	Glauber Costa <glommer@openvz.org>,
	"Theodore Ts'o" <tytso@mit.edu>,
	Al Viro <viro@zeniv.linux.org.uk>
Subject: [PATCH v9 02/35] super: fix calculation of shrinkable objects for small numbers
Date: Thu, 30 May 2013 14:35:48 +0400
Message-ID: <1369910181-20026-3-git-send-email-glommer@openvz.org>
In-Reply-To: <1369910181-20026-1-git-send-email-glommer@openvz.org>

The sysctl knob sysctl_vfs_cache_pressure is used to determine what
percentage of the shrinkable objects in our cache we should actively try
to shrink.

It works well in situations where we have many objects (more than 100
or so), because the approximation error is negligible. But when that is
not the case, especially when total_objects < 100, we may end up
concluding that we have no objects at all (total / 100 = 0 if
total < 100).
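
For illustration, with the default vfs_cache_pressure of 100 and only 90
shrinkable objects, the old expression reports 0 while a proportional
calculation reports 90. A minimal userspace sketch of the arithmetic
(the helper names below are illustrative only, not kernel symbols):

	#include <stdio.h>

	/* old: divide first; truncates to 0 whenever total < 100 */
	static unsigned long old_ratio(unsigned long total, unsigned long pressure)
	{
		return (total / 100) * pressure;
	}

	/*
	 * new: scale first; equivalent to mult_frac(total, pressure, 100)
	 * as long as the product does not overflow
	 */
	static unsigned long new_ratio(unsigned long total, unsigned long pressure)
	{
		return (total * pressure) / 100;
	}

	int main(void)
	{
		printf("old: %lu new: %lu\n", old_ratio(90, 100), new_ratio(90, 100));
		/* prints "old: 0 new: 90" */
		return 0;
	}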

This is certainly not the biggest killer in the world, but may matter in
very low kernel memory situations.
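
The fix routes all of these calculations through a vfs_pressure_ratio()
helper built on mult_frac(), which keeps the remainder's contribution
instead of throwing it away. Roughly (a sketch of the idea, not
necessarily the exact kernel macro):

	#define mult_frac(x, numer, denom)				\
	({								\
		typeof(x) quot = (x) / (denom);				\
		typeof(x) rem  = (x) % (denom);				\
		(quot * (numer)) + ((rem * (numer)) / (denom));		\
	})

so vfs_pressure_ratio(90) with vfs_cache_pressure == 100 evaluates to
0 * 100 + (90 * 100) / 100 = 90 instead of 0, while large counts still
avoid overflowing the intermediate product.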

[ v2: fix it for all occurrences of sysctl_vfs_cache_pressure ]

Signed-off-by: Glauber Costa <glommer@openvz.org>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mgorman@suse.de>
CC: Dave Chinner <david@fromorbit.com>
CC: "Theodore Ts'o" <tytso@mit.edu>
CC: Al Viro <viro@zeniv.linux.org.uk>
---
 fs/gfs2/glock.c        |  2 +-
 fs/gfs2/quota.c        |  2 +-
 fs/mbcache.c           |  2 +-
 fs/nfs/dir.c           |  2 +-
 fs/quota/dquot.c       |  5 ++---
 fs/super.c             | 14 +++++++-------
 fs/xfs/xfs_qm.c        |  2 +-
 include/linux/dcache.h |  4 ++++
 8 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 9435384..3bd2748 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1463,7 +1463,7 @@ static int gfs2_shrink_glock_memory(struct shrinker *shrink,
 		gfs2_scan_glock_lru(sc->nr_to_scan);
 	}
 
-	return (atomic_read(&lru_count) / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(atomic_read(&lru_count));
 }
 
 static struct shrinker glock_shrinker = {
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index c253b13..f9f4077 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -114,7 +114,7 @@ int gfs2_shrink_qd_memory(struct shrinker *shrink, struct shrink_control *sc)
 	spin_unlock(&qd_lru_lock);
 
 out:
-	return (atomic_read(&qd_lru_count) * sysctl_vfs_cache_pressure) / 100;
+	return vfs_pressure_ratio(atomic_read(&qd_lru_count));
 }
 
 static u64 qd2index(struct gfs2_quota_data *qd)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index 8c32ef3..5eb0476 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -189,7 +189,7 @@ mb_cache_shrink_fn(struct shrinker *shrink, struct shrink_control *sc)
 	list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list) {
 		__mb_cache_entry_forget(entry, gfp_mask);
 	}
-	return (count / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(count);
 }
 
 
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index c662ff6..a6a3d05 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1978,7 +1978,7 @@ remove_lru_entry:
 	}
 	spin_unlock(&nfs_access_lru_lock);
 	nfs_access_free_list(&head);
-	return (atomic_long_read(&nfs_access_nr_entries) / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
 }
 
 static void __nfs_access_zap_cache(struct nfs_inode *nfsi, struct list_head *head)
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 3e64169..762b09c 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -719,9 +719,8 @@ static int shrink_dqcache_memory(struct shrinker *shrink,
 		prune_dqcache(nr);
 		spin_unlock(&dq_list_lock);
 	}
-	return ((unsigned)
-		percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS])
-		/100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(
+	percpu_counter_read_positive(&dqstats.counter[DQST_FREE_DQUOTS]));
 }
 
 static struct shrinker dqcache_shrinker = {
diff --git a/fs/super.c b/fs/super.c
index 7465d43..2a37fd6 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -82,13 +82,13 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 		int	inodes;
 
 		/* proportion the scan between the caches */
-		dentries = (sc->nr_to_scan * sb->s_nr_dentry_unused) /
-							total_objects;
-		inodes = (sc->nr_to_scan * sb->s_nr_inodes_unused) /
-							total_objects;
+		dentries = mult_frac(sc->nr_to_scan, sb->s_nr_dentry_unused,
+							total_objects);
+		inodes = mult_frac(sc->nr_to_scan, sb->s_nr_inodes_unused,
+							total_objects);
 		if (fs_objects)
-			fs_objects = (sc->nr_to_scan * fs_objects) /
-							total_objects;
+			fs_objects = mult_frac(sc->nr_to_scan, fs_objects,
+							total_objects);
 		/*
 		 * prune the dcache first as the icache is pinned by it, then
 		 * prune the icache, followed by the filesystem specific caches
@@ -104,7 +104,7 @@ static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
 				sb->s_nr_inodes_unused + fs_objects;
 	}
 
-	total_objects = (total_objects / 100) * sysctl_vfs_cache_pressure;
+	total_objects = vfs_pressure_ratio(total_objects);
 	drop_super(sb);
 	return total_objects;
 }
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index f41702b..7ade175 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -1585,7 +1585,7 @@ xfs_qm_shake(
 	}
 
 out:
-	return (qi->qi_lru_count / 100) * sysctl_vfs_cache_pressure;
+	return vfs_pressure_ratio(qi->qi_lru_count);
 }
 
 /*
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index 1a82bdb..bd08285 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -411,4 +411,8 @@ static inline bool d_mountpoint(struct dentry *dentry)
 
 extern int sysctl_vfs_cache_pressure;
 
+static inline unsigned long vfs_pressure_ratio(unsigned long val)
+{
+	return mult_frac(val, sysctl_vfs_cache_pressure, 100);
+}
 #endif	/* __LINUX_DCACHE_H */
-- 
1.8.1.4

Thread overview: 36+ messages
2013-05-30 10:35 [PATCH v9 00/35] kmemcg shrinkers Glauber Costa
2013-05-30 10:35 ` Glauber Costa [this message]
2013-05-30 10:35   ` [PATCH v9 01/35] fs: bump inode and dentry counters to long Glauber Costa
2013-05-30 10:35   ` [PATCH v9 03/35] dcache: convert dentry_stat.nr_unused to per-cpu counters Glauber Costa
2013-05-30 10:35   ` [PATCH v9 04/35] dentry: move to per-sb LRU locks Glauber Costa
2013-05-30 10:35   ` [PATCH v9 05/35] dcache: remove dentries from LRU before putting on dispose list Glauber Costa
2013-05-30 10:35   ` [PATCH v9 06/35] mm: new shrinker API Glauber Costa
2013-05-30 10:35   ` [PATCH v9 07/35] shrinker: convert superblock shrinkers to new API Glauber Costa
2013-05-30 10:35   ` [PATCH v9 08/35] list: add a new LRU list type Glauber Costa
2013-05-30 10:35   ` [PATCH v9 09/35] inode: convert inode lru list to generic lru list code Glauber Costa
2013-05-30 10:35   ` [PATCH v9 10/35] dcache: convert to use new lru list infrastructure Glauber Costa
2013-05-30 10:35   ` [PATCH v9 11/35] list_lru: per-node " Glauber Costa
2013-05-30 10:35   ` [PATCH v9 12/35] shrinker: add node awareness Glauber Costa
2013-05-30 10:35   ` [PATCH v9 13/35] vmscan: per-node deferred work Glauber Costa
2013-05-30 10:36   ` [PATCH v9 14/35] list_lru: per-node API Glauber Costa
2013-05-30 10:36   ` [PATCH v9 15/35] fs: convert inode and dentry shrinking to be node aware Glauber Costa
2013-05-30 10:36   ` [PATCH v9 16/35] xfs: convert buftarg LRU to generic code Glauber Costa
2013-05-30 10:36   ` [PATCH v9 17/35] xfs: rework buffer dispose list tracking Glauber Costa
2013-05-30 10:36   ` [PATCH v9 18/35] xfs: convert dquot cache lru to list_lru Glauber Costa
2013-05-30 10:36   ` [PATCH v9 19/35] fs: convert fs shrinkers to new scan/count API Glauber Costa
2013-05-30 10:36   ` [PATCH v9 21/35] i915: bail out earlier when shrinker cannot acquire mutex Glauber Costa
2013-05-30 10:36   ` [PATCH v9 22/35] shrinker: convert remaining shrinkers to count/scan API Glauber Costa
2013-05-30 10:36   ` [PATCH v9 23/35] hugepage: convert huge zero page shrinker to new shrinker API Glauber Costa
2013-05-30 10:36   ` [PATCH v9 24/35] shrinker: Kill old ->shrink API Glauber Costa
2013-05-30 10:36   ` [PATCH v9 25/35] vmscan: also shrink slab in memcg pressure Glauber Costa
2013-05-30 10:36   ` [PATCH v9 26/35] memcg,list_lru: duplicate LRUs upon kmemcg creation Glauber Costa
2013-05-30 10:36   ` [PATCH v9 27/35] lru: add an element to a memcg list Glauber Costa
2013-05-30 10:36   ` [PATCH v9 28/35] list_lru: per-memcg walks Glauber Costa
2013-05-30 10:36   ` [PATCH v9 29/35] memcg: per-memcg kmem shrinking Glauber Costa
2013-05-30 10:36   ` [PATCH v9 30/35] memcg: scan cache objects hierarchically Glauber Costa
2013-05-30 10:36   ` [PATCH v9 32/35] super: targeted memcg reclaim Glauber Costa
2013-05-30 10:36   ` [PATCH v9 33/35] memcg: move initialization to memcg creation Glauber Costa
2013-05-30 10:36   ` [PATCH v9 34/35] vmpressure: in-kernel notifications Glauber Costa
2013-05-30 10:36   ` [PATCH v9 35/35] memcg: reap dead memcgs upon global memory pressure Glauber Costa
2013-05-30 10:36 ` [PATCH v9 20/35] drivers: convert shrinkers to new count/scan API Glauber Costa
2013-05-30 10:36 ` [PATCH v9 31/35] vmscan: take at least one pass with shrinkers Glauber Costa
