From: Jan Kara <jack@suse.cz>
To: linux-fsdevel@vger.kernel.org
Cc: LKML <linux-kernel@vger.kernel.org>,
	hare@suse.de, Andrew Morton <akpm@linux-foundation.org>,
	Al Viro <viro@ZenIV.linux.org.uk>,
	Christoph Hellwig <hch@infradead.org>, Jan Kara <jack@suse.cz>
Subject: [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation
Date: Mon,  6 Feb 2012 14:55:31 +0100
Message-ID: <1328536531-19034-1-git-send-email-jack@suse.cz>

When lots of disks are discovered in parallel, the partitioning code calls
invalidate_bh_lrus() once for each disk. This results in a storm of IPIs and
causes the softlockup detector to fire (it takes several *minutes* for the
machine to execute all the invalidate_bh_lrus() calls).
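
(For scale: each invalidate_bh_lrus() call sends an IPI to every CPU and
waits for completion, so the total cost grows with nr_cpus times the number
of disks; on a hypothetical machine with 64 CPUs and a couple of thousand
LUNs that is already well over 100,000 synchronous IPIs for one rescan.)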

Fix the issue by allowing only a single invalidation to run at a time,
serialized by a mutex, and let waiters for the mutex figure out whether
someone else already invalidated the LRUs for them while they were waiting.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/buffer.c |   23 ++++++++++++++++++++++-
 1 files changed, 22 insertions(+), 1 deletions(-)

  I feel this is a slightly hacky approach but it works. If someone has a
better idea, please speak up.
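
  For reference, the core of the idea is just a generation counter guarded
by a mutex. A minimal userspace sketch of the same pattern (the names
flush_once_per_batch() and expensive_global_flush() are made up here as
stand-ins for invalidate_bh_lrus() and the on_each_cpu() broadcast):

#include <pthread.h>

static pthread_mutex_t flush_mutex = PTHREAD_MUTEX_INITIALIZER;
static long flush_sequence;

/* Stand-in for the expensive on_each_cpu() IPI broadcast. */
static void expensive_global_flush(void)
{
}

void flush_once_per_batch(void)
{
	/* Snapshot the generation before we possibly sleep on the lock. */
	long seq = flush_sequence;

	pthread_mutex_lock(&flush_mutex);
	/* If nobody flushed while we waited, we have to do it ourselves. */
	if (seq == flush_sequence) {
		flush_sequence++;
		expensive_global_flush();
	}
	pthread_mutex_unlock(&flush_mutex);
}

Skipping is safe because any flush that bumps the counter after we sampled
it started only after our own request, so it already covers whatever we
wanted flushed.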

diff --git a/fs/buffer.c b/fs/buffer.c
index 1a30db7..56b0d2b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1384,10 +1384,31 @@ static void invalidate_bh_lru(void *arg)
 	}
 	put_cpu_var(bh_lrus);
 }
-	
+
+/*
+ * Invalidate all buffers in LRUs. Since we have to signal all CPUs to
+ * invalidate their per-cpu local LRU lists this is rather expensive operation.
+ * So we optimize the case of several parallel calls to invalidate_bh_lrus()
+ * which happens from partitioning code when lots of disks appear in the
+ * system during boot.
+ */
 void invalidate_bh_lrus(void)
 {
+	static DEFINE_MUTEX(bh_invalidate_mutex);
+	static long bh_invalidate_sequence;
+
+	long my_bh_invalidate_sequence = bh_invalidate_sequence;
+
+	mutex_lock(&bh_invalidate_mutex);
+	/* Someone did bh invalidation while we were sleeping? */
+	if (my_bh_invalidate_sequence != bh_invalidate_sequence)
+		goto out;
+	bh_invalidate_sequence++;
+	/* Inc of bh_invalidate_sequence must happen before we invalidate bhs */
+	smp_wmb();
 	on_each_cpu(invalidate_bh_lru, NULL, 1);
+out:
+	mutex_unlock(&bh_invalidate_mutex);
 }
 EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
 
-- 
1.7.1

Thread overview: 9+ messages
2012-02-06 13:55 Jan Kara [this message]
2012-02-06 15:42 ` [PATCH] vfs: Avoid IPI storm due to bh LRU invalidation Srivatsa S. Bhat
2012-02-06 15:51   ` Hannes Reinecke
2012-02-06 16:47   ` Jan Kara
2012-02-06 21:17     ` Andrew Morton
2012-02-06 22:25       ` Jan Kara
2012-02-07 16:25         ` Gilad Ben-Yossef
2012-02-07 18:29           ` Jan Kara
2012-02-08  7:09             ` Gilad Ben-Yossef
