public inbox for linux-xfs@vger.kernel.org
From: Eric Sandeen <sandeen@sandeen.net>
To: David Chinner <dgc@sgi.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	linux-kernel Mailing List <linux-kernel@vger.kernel.org>,
	xfs-oss <xfs@oss.sgi.com>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH
Date: Fri, 31 Aug 2007 09:33:30 -0500	[thread overview]
Message-ID: <46D826BA.1060705@sandeen.net> (raw)
In-Reply-To: <20070831135042.GD422459@sgi.com>

David Chinner wrote:
> On Fri, Aug 31, 2007 at 08:39:49AM +0200, Peter Zijlstra wrote:
>> On Thu, 2007-08-30 at 23:43 -0500, Eric Sandeen wrote:
>>> The xfs filesystem can exceed the current lockdep 
>>> MAX_LOCK_DEPTH, because when deleting an entire cluster of inodes,
>>> they all get locked in xfs_ifree_cluster().  The normal cluster
>>> size is 8192 bytes, and with the default (and minimum) inode size 
>>> of 256 bytes, that's up to 32 inodes that get locked.  Throw in a 
>>> few other locks along the way, and 40 seems enough to get me through
>>> all the tests in the xfsqa suite on 4k blocks.  (block sizes
>>> above 8K will still exceed this though, I think)
>> As 40 will still not be enough for people with larger block sizes, this
>> does not seem like a solid solution. Could XFS possibly batch in
>> smaller (fixed sized) chunks, or does that have significant down sides?
> 
> The problem is not filesystem block size, it's the xfs inode cluster buffer
> size / the size of the inodes that determines the lock depth. The common case
> is 8k/256 = 32 inodes in a buffer, and they all get locked during inode
> cluster writeback.

Right, but as I understand it, the cluster size *minimum* is the block
size; that's why I made reference to block size - 16k blocks would have
64 inodes per cluster, minimum, potentially all locked in these paths.
Just saying that today, larger blocks -> larger clusters -> more locks.

Even though MAX_LOCK_DEPTH of 40 may not accommodate these scenarios, at
least it would accommodate the most common case today...

Peter, unless there is some other reason to do so, changing xfs
performance behavior simply to satisfy lockdep limitations* doesn't seem
like the best plan.

I suppose one slightly flaky option would be for xfs to see whether
lockdep is enabled and adjust cluster size based on MAX_LOCK_DEPTH... on
the argument that lockdep is likely used in debugging kernels where
sheer performance is less important... but that sounds pretty flaky to me.

-Eric

*and I don't mean that in a pejorative sense; just the fact that some
max depth must be chosen - the literal "limitation."


Thread overview: 11+ messages
2007-08-31  4:43 [PATCH] Increase lockdep MAX_LOCK_DEPTH Eric Sandeen
2007-08-31  6:39 ` Peter Zijlstra
2007-08-31 13:50   ` David Chinner
2007-08-31 14:33     ` Eric Sandeen [this message]
2007-08-31 14:36       ` Peter Zijlstra
2007-08-31 14:33     ` Peter Zijlstra
2007-08-31 15:05       ` David Chinner
2007-08-31 15:09         ` Peter Zijlstra
2007-08-31 15:11           ` Eric Sandeen
2007-08-31 15:19           ` David Chinner
2007-08-31 16:33           ` Josef Sipek
