From: Eric Sandeen <sandeen@redhat.com>
To: xfs-oss <xfs@oss.sgi.com>
Cc: Tomek Kruszona <bloodyscarion@gmail.com>,
Riku Paananen <riku.paananen@helsinki.fi>
Subject: [PATCH] libxfs: increase hash chain depth when we run out of slots
Date: Thu, 17 Sep 2009 11:06:16 -0500
Message-ID: <4AB25E78.8050001@redhat.com>

A couple of people reported xfs_repair hanging after printing
"Traversing filesystem ...". This happens when all slots in the
cache are full and referenced: the loop in cache_node_get() which
tries to shake unused entries fails to find any, so it just keeps
upping the priority and spins forever.
This can be worked around by restarting xfs_repair with -P, and/or
with "-o bhash=<largersize>" on older xfs_repair.
I started down the path of increasing the number of hash buckets
on the fly, but Barry suggested simply increasing the max allowed
depth which is much simpler (thanks!)
Growing the hash chain depths does mean that cache_report ends up
with most buckets in the "greater-than" category:
...
Hash buckets with 23 entries 3 ( 3%)
Hash buckets with 24 entries 3 ( 3%)
Hash buckets with >24 entries 50 ( 85%)
but I think I'll save that fix for another patch unless there's
real concern right now.
I tested this on the metadump image provided by Tomek.
Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
Reported-by: Tomek Kruszona <bloodyscarion@gmail.com>
Reported-by: Riku Paananen <riku.paananen@helsinki.fi>
---
diff --git a/libxfs/cache.c b/libxfs/cache.c
index 48f91d7..56b24e7 100644
--- a/libxfs/cache.c
+++ b/libxfs/cache.c
@@ -83,6 +83,18 @@ cache_init(
}
void
+cache_expand(
+ struct cache * cache)
+{
+ pthread_mutex_lock(&cache->c_mutex);
+#ifdef CACHE_DEBUG
+ fprintf(stderr, "doubling cache size to %d\n", 2 * cache->c_maxcount);
+#endif
+ cache->c_maxcount *= 2;
+ pthread_mutex_unlock(&cache->c_mutex);
+}
+
+void
cache_walk(
struct cache * cache,
cache_walk_t visit)
@@ -344,6 +356,15 @@ cache_node_get(
if (node)
break;
priority = cache_shake(cache, priority, 0);
+ /*
+ * We start at 0; if we free CACHE_SHAKE_COUNT we get
+ * back the same priority, if not we get back priority+1.
+ * If we exceed CACHE_MAX_PRIORITY all slots are full; grow it.
+ */
+ if (priority > CACHE_MAX_PRIORITY) {
+ priority = 0;
+ cache_expand(cache);
+ }
}
node->cn_hashidx = hashidx;