From: Minfei Huang <mhuang@redhat.com>
To: viro@zeniv.linux.org.uk
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	mhuang@redhat.com, Minfei Huang <mnfhuang@gmail.com>
Subject: [PATCH] fs/buffer: simplify the code flow of LRU management algorithm
Date: Thu, 10 Sep 2015 16:09:39 +0800
Message-ID: <1441872579-31595-1-git-send-email-mhuang@redhat.com>

From: Minfei Huang <mnfhuang@gmail.com>

There is a per-CPU LRU cache of buffer_heads used to speed up repeated
lookups. The LRU management algorithm in bh_lru_install() is simple
enough that its code flow can be streamlined.

There are three situations to deal with:
1) All or part of the LRU cache entries are NULL.
2) The new buffer_head hits an entry in the LRU cache.
3) The new buffer_head does not hit any entry in the LRU cache.

In each case we put the new buffer_head at the head of the LRU cache,
copy the surviving entries over from the original cache, and release
the evicted spare, as in the sketch below.
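
For illustration, below is a minimal user-space model of the new flow.
It is only a sketch: a plain reference count stands in for
get_bh()/__brelse(), and struct buf, release(), lru_install() and
LRU_SIZE are hypothetical stand-ins for struct buffer_head, __brelse(),
bh_lru_install() and BH_LRU_SIZE.

#include <stdio.h>
#include <string.h>

#define LRU_SIZE 8			/* stands in for BH_LRU_SIZE */

struct buf { int id; int refcount; };

static struct buf *lru[LRU_SIZE];	/* stands in for bh_lrus.bhs */

static void release(struct buf *b)	/* stands in for __brelse() */
{
	b->refcount--;
}

static void lru_install(struct buf *b)
{
	struct buf *tmp[LRU_SIZE];
	int in, out = 1;

	if (lru[0] == b)
		return;			/* already at the head */

	b->refcount++;			/* the cache holds one reference */
	tmp[0] = b;

	for (in = 0; in < LRU_SIZE; in++) {
		struct buf *b2 = lru[in];

		if (b2 == NULL) {
			break;		/* everything after this is NULL too */
		} else if (b2 == b) {
			release(b2);	/* drop the duplicate reference */
		} else if (out == LRU_SIZE) {
			release(b2);	/* no hit and cache full: evict last */
			break;
		} else {
			tmp[out++] = b2;
		}
	}
	/* entries past 'out' in lru[] are already NULL */
	memcpy(lru, tmp, sizeof(struct buf *) * out);
}

int main(void)
{
	struct buf a = { 1, 0 }, b = { 2, 0 };

	lru_install(&a);	/* cache: [a] */
	lru_install(&b);	/* cache: [b, a] */
	lru_install(&a);	/* hit: a moves back to the head */
	printf("head id=%d refcount=%d\n", lru[0]->id, lru[0]->refcount);
	return 0;
}

The break on the first NULL entry is what allows the final memcpy() to
copy only out pointers: every slot past that point was already NULL, so
nothing needs to be written back there.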

Signed-off-by: Minfei Huang <mnfhuang@gmail.com>
---
 fs/buffer.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 1cf7a53..2139574 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1287,8 +1287,6 @@ static inline void check_irqs_on(void)
  */
 static void bh_lru_install(struct buffer_head *bh)
 {
-	struct buffer_head *evictee = NULL;
-
 	check_irqs_on();
 	bh_lru_lock();
 	if (__this_cpu_read(bh_lrus.bhs[0]) != bh) {
@@ -1302,25 +1300,35 @@ static void bh_lru_install(struct buffer_head *bh)
 			struct buffer_head *bh2 =
 				__this_cpu_read(bh_lrus.bhs[in]);
 
-			if (bh2 == bh) {
+			if (bh2 == NULL) {
+				/* The rest of bh_lrus.bhs is always NULL */
+				break;
+			} else if (bh2 == bh) {
 				__brelse(bh2);
 			} else {
-				if (out >= BH_LRU_SIZE) {
-					BUG_ON(evictee != NULL);
-					evictee = bh2;
+				if (out == BH_LRU_SIZE) {
+					/*
+					 * We only get here when no
+					 * entry in bh_lrus.bhs matched
+					 * the new bh, so the last bh
+					 * must be released.
+					 */
+					BUG_ON(in != BH_LRU_SIZE - 1);
+					__brelse(bh2);
+					break;
 				} else {
 					bhs[out++] = bh2;
 				}
 			}
 		}
-		while (out < BH_LRU_SIZE)
-			bhs[out++] = NULL;
-		memcpy(this_cpu_ptr(&bh_lrus.bhs), bhs, sizeof(bhs));
+		/*
+		 * It is fine if out is smaller than BH_LRU_SIZE; the
+		 * remaining entries in bh_lrus.bhs are already NULL.
+		 */
+		memcpy(this_cpu_ptr(&bh_lrus.bhs), bhs,
+				sizeof(struct buffer_head *) * out);
 	}
 	bh_lru_unlock();
-
-	if (evictee)
-		__brelse(evictee);
 }
 
 /*
-- 
2.1.0


