From: Steven Whitehouse <swhiteho@redhat.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] [PATCH 1/7] GFS2: Shrink the shrinker
Date: Thu, 30 Jul 2009 14:45:18 +0100
Message-ID: <1248961524-30913-2-git-send-email-swhiteho@redhat.com>
In-Reply-To: <1248961524-30913-1-git-send-email-swhiteho@redhat.com>

This patch removes some of the special cases that the shrinker
was trying to deal with. As a result we leave fewer items on
the list, and none that cannot be demoted. This makes the
list scanning more efficient and resolves some issues seen
with large numbers of inodes.
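
For reference, here is a condensed sketch of the scan loop as it reads
once this patch is applied. It is assembled from the hunks below; the
loop header and the unchanged lines that fall between hunks are
paraphrased as comments, so treat it as an outline rather than the
verbatim function:

	while (nr && !list_empty(&lru_list)) {
		/* ... take the first glock off the LRU list ... */

		/* Test for being demotable */
		if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
			gfs2_glock_hold(gl);
			spin_unlock(&lru_lock);
			spin_lock(&gl->gl_spin);
			may_demote = demote_ok(gl);
			/* ... drop gl->gl_spin and clear GLF_LOCK ... */
			if (may_demote) {
				handle_callback(gl, LM_ST_UNLOCKED, 0);
				nr--;
			}
			/* The work is now queued unconditionally; the
			   reference taken above is handed to the work,
			   or dropped here if it was already queued. */
			if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0)
				gfs2_glock_put(gl);
			spin_lock(&lru_lock);
			continue;
		}
		/* GLF_LOCK is busy: park the glock on a local list
		   instead of leaving undemotable entries on the LRU */
		nr_skipped++;
		list_add(&gl->gl_lru, &skipped);
	}
	/* Return only the skipped (busy) glocks to the LRU */
	list_splice(&skipped, &lru_list);
	atomic_add(nr_skipped, &lru_count);
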
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
---
fs/gfs2/glock.c | 23 +++++------------------
1 file changed, 5 insertions(+), 18 deletions(-)

diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 297421c..fdb796c 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1300,7 +1300,6 @@ static int gfs2_shrink_glock_memory(int nr, gfp_t gfp_mask)
struct gfs2_glock *gl;
int may_demote;
int nr_skipped = 0;
- int got_ref = 0;
LIST_HEAD(skipped);
if (nr == 0)
@@ -1318,7 +1317,6 @@ static int gfs2_shrink_glock_memory(int nr, gfp_t gfp_mask)
/* Test for being demotable */
if (!test_and_set_bit(GLF_LOCK, &gl->gl_flags)) {
gfs2_glock_hold(gl);
- got_ref = 1;
spin_unlock(&lru_lock);
spin_lock(&gl->gl_spin);
may_demote = demote_ok(gl);
@@ -1327,25 +1325,14 @@ static int gfs2_shrink_glock_memory(int nr, gfp_t gfp_mask)
if (may_demote) {
handle_callback(gl, LM_ST_UNLOCKED, 0);
nr--;
- if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0)
- gfs2_glock_put(gl);
- got_ref = 0;
}
+ if (queue_delayed_work(glock_workqueue, &gl->gl_work, 0) == 0)
+ gfs2_glock_put(gl);
spin_lock(&lru_lock);
- if (may_demote)
- continue;
- }
- if (list_empty(&gl->gl_lru) &&
- (atomic_read(&gl->gl_ref) <= (2 + got_ref))) {
- nr_skipped++;
- list_add(&gl->gl_lru, &skipped);
- }
- if (got_ref) {
- spin_unlock(&lru_lock);
- gfs2_glock_put(gl);
- spin_lock(&lru_lock);
- got_ref = 0;
+ continue;
}
+ nr_skipped++;
+ list_add(&gl->gl_lru, &skipped);
}
list_splice(&skipped, &lru_list);
atomic_add(nr_skipped, &lru_count);
--
1.6.2.2