linux-fsdevel.vger.kernel.org archive mirror
* re: [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-22  0:54 Wang Sheng-Hui
  2010-07-22  1:06 ` shenghui
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Sheng-Hui @ 2010-07-22  0:54 UTC (permalink / raw)
  To: Eric Sandeen, agruen, hch, linux-ext4, linux-kernel,
	linux-fsdevel, linux-mm

Sorry, I missed that. I have regenerated the patch and it passes the
checkpatch.pl check.
Please review it.


Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   23 ++++++++++++-----------
 1 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..603170e 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
+	spin_lock(&mb_cache_spinlock);
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -229,6 +222,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 						   e_lru_list), gfp_mask);
 	}
 out:
+	spin_lock(&mb_cache_spinlock);
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3
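
For easier review, here is roughly how mb_cache_shrink_fn() reads with the
patch above applied. This is a sketch reconstructed from the diff and its
context lines, not a verbatim copy of the patched fs/mbcache.c: the parts
the patch does not touch (the function signature and the spin_unlock()
before the __mb_cache_entry_forget() loop) are filled in from the pre-patch
function and may differ slightly in layout.

static int
mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
{
	LIST_HEAD(free_list);
	struct list_head *l, *ltmp;
	struct mb_cache *cache;
	int count = 0;

	mb_debug("trying to free %d entries", nr_to_scan);
	if (nr_to_scan == 0)
		goto out;

	/* Detach up to nr_to_scan entries from the LRU and unhash them. */
	spin_lock(&mb_cache_spinlock);
	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
		struct mb_cache_entry *ce =
			list_entry(mb_cache_lru_list.next,
				   struct mb_cache_entry, e_lru_list);
		list_move_tail(&ce->e_lru_list, &free_list);
		__mb_cache_entry_unhash(ce);
	}
	spin_unlock(&mb_cache_spinlock);

	/* Free the detached entries without holding the spinlock. */
	list_for_each_safe(l, ltmp, &free_list) {
		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
						   e_lru_list), gfp_mask);
	}
out:
	/*
	 * Count what is left after shrinking, so the caller sees the
	 * number of objects that remain in the cache.
	 */
	spin_lock(&mb_cache_spinlock);
	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
		mb_debug("cache %s (%d)", cache->c_name,
			  atomic_read(&cache->c_entry_count));
		count += atomic_read(&cache->c_entry_count);
	}
	spin_unlock(&mb_cache_spinlock);

	return (count / 100) * sysctl_vfs_cache_pressure;
}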

* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-21 10:53 Wang Sheng-Hui
  2010-07-21 14:00 ` Eric Sandeen
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Sheng-Hui @ 2010-07-21 10:53 UTC (permalink / raw)
  To: sandeen, agruen, hch, linux-ext4, linux-fsdevel, linux-kernel,
	linux-mm, kerne

Sorry, I have regenerated the patch; please review it.
I wrapped most of the code in a single spin_lock/spin_unlock pair for two
reasons:
1) taking the spinlock twice seems needlessly expensive;
2) a single critical section keeps "count" consistent with the shrink
   operation: with two separate lock/unlock pairs, other processes could
   create new cache entries in between, and those would be mixed into the
   count (see the sketch below).
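
The resulting control flow, condensed from the diff below (the LRU scan and
the free loop are elided; this is a sketch, not the full function):

	mb_debug("trying to free %d entries", nr_to_scan);

	spin_lock(&mb_cache_spinlock);
	if (nr_to_scan == 0)
		goto out;
	/*
	 * ... detach up to nr_to_scan LRU entries to free_list, unhash
	 * them, and free them with __mb_cache_entry_forget(), all while
	 * still holding mb_cache_spinlock ...
	 */
out:
	/*
	 * Count the remaining entries inside the same critical section,
	 * so entries created by other processes in the meantime are not
	 * mixed into the result.
	 */
	list_for_each_entry(cache, &mb_cache_list, c_cache_list)
		count += atomic_read(&cache->c_entry_count);
	spin_unlock(&mb_cache_spinlock);

	return (count / 100) * sysctl_vfs_cache_pressure;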



Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..ee57aa3 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,15 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
 	LIST_HEAD(free_list);
 	struct list_head *l, *ltmp;
+	struct mb_cache *cache;
 	int count = 0;
 
-	spin_lock(&mb_cache_spinlock);
-	list_for_each(l, &mb_cache_list) {
-		struct mb_cache *cache =
-			list_entry(l, struct mb_cache, c_cache_list);
-		mb_debug("cache %s (%d)", cache->c_name,
-			  atomic_read(&cache->c_entry_count));
-		count += atomic_read(&cache->c_entry_count);
-	}
 	mb_debug("trying to free %d entries", nr_to_scan);
-	if (nr_to_scan == 0) {
-		spin_unlock(&mb_cache_spinlock);
+
+	spin_lock(&mb_cache_spinlock);
+	if (nr_to_scan == 0)
 		goto out;
-	}
+
 	while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
 		struct mb_cache_entry *ce =
 			list_entry(mb_cache_lru_list.next,
@@ -223,12 +217,18 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		list_move_tail(&ce->e_lru_list, &free_list);
 		__mb_cache_entry_unhash(ce);
 	}
-	spin_unlock(&mb_cache_spinlock);
 	list_for_each_safe(l, ltmp, &free_list) {
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
 out:
+	list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
+
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
 
-- 
1.6.3.3


* [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0
@ 2010-07-18  1:01 Wang Sheng-Hui
  2010-07-18  4:06 ` Eric Sandeen
  0 siblings, 1 reply; 14+ messages in thread
From: Wang Sheng-Hui @ 2010-07-18  1:01 UTC (permalink / raw)
  To: linux-fsdevel, viro, linux-mm, linux-ext4, kernel-janitors,
	a.gruenbacher

Hi,

The comment for struct shrinker in include/linux/mm.h says
"shrink...It should return the number of objects which remain in the
cache."
Please notice the word "remain".

In fs/mbcache.c, mb_cache_shrink_fn is used as the shrink function:
 	static struct shrinker mb_cache_shrinker = {	
 		.shrink = mb_cache_shrink_fn,
 		.seeks = DEFAULT_SEEKS,
 	};
In mb_cache_shrink_fn, the value returned when nr_to_scan > 0 is the number
of mb_cache_entry objects counted before the shrink operation, not the
number that remain afterwards. The effect is probably hard to notice only
because mbcache's memory usage is low, but I think we should fix the return
value anyway (see the illustration below).
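
To illustrate why this matters, this is a simplified paraphrase (from
memory, not a verbatim quote) of how shrink_slab() in mm/vmscan.c uses the
callback around 2.6.35:

	while (total_scan >= SHRINK_BATCH) {
		long this_scan = SHRINK_BATCH;
		int shrink_ret;
		int nr_before;

		/* ask how many objects exist, then ask to scan some */
		nr_before  = (*shrinker->shrink)(0, gfp_mask);
		shrink_ret = (*shrinker->shrink)(this_scan, gfp_mask);
		if (shrink_ret == -1)
			break;
		if (shrink_ret < nr_before)
			ret += nr_before - shrink_ret;	/* credit the freed objects */
		total_scan -= this_scan;
	}

Because mb_cache_shrink_fn computes count before the selected entries are
actually freed, the second call returns (roughly) the same value as the
first, so the freed entries are never credited back to shrink_slab().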

The following patch is against 2.6.35-rc5. Please review it.

Signed-off-by: Wang Sheng-Hui <crosslonelyover@gmail.com>
---
 fs/mbcache.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..412e7cc 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -228,6 +228,16 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 		__mb_cache_entry_forget(list_entry(l, struct mb_cache_entry,
 						   e_lru_list), gfp_mask);
 	}
+	spin_lock(&mb_cache_spinlock);
+	count = 0;
+	list_for_each(l, &mb_cache_list) {
+		struct mb_cache *cache =
+			list_entry(l, struct mb_cache, c_cache_list);
+		mb_debug("cache %s (%d)", cache->c_name,
+			  atomic_read(&cache->c_entry_count));
+		count += atomic_read(&cache->c_entry_count);
+	}
+	spin_unlock(&mb_cache_spinlock);
 out:
 	return (count / 100) * sysctl_vfs_cache_pressure;
 }
-- 
1.7.1.1





-- 
Thanks and Regards,
shenghui




Thread overview: 14+ messages
2010-07-22  0:54 [PATCH] fix return value for mb_cache_shrink_fn when nr_to_scan > 0 Wang Sheng-Hui
2010-07-22  1:06 ` shenghui
  -- strict thread matches above, loose matches on Subject: below --
2010-07-21 10:53 Wang Sheng-Hui
2010-07-21 14:00 ` Eric Sandeen
2010-07-18  1:01 Wang Sheng-Hui
2010-07-18  4:06 ` Eric Sandeen
2010-07-18  6:01   ` Christoph Hellwig
2010-07-18  6:36     ` Wang Sheng-Hui
2010-07-19 18:39       ` Andreas Gruenbacher
2010-07-20  1:02         ` shenghui
2010-07-20  1:04           ` shenghui
2010-07-20 15:13         ` Eric Sandeen
2010-07-20 16:34           ` Andreas Gruenbacher
2010-07-19 18:40       ` Andreas Gruenbacher
