From: Peter Zijlstra <peterz@infradead.org>
To: David Chinner <dgc@sgi.com>
Cc: Eric Sandeen <sandeen@sandeen.net>,
	linux-kernel Mailing List <linux-kernel@vger.kernel.org>,
	xfs-oss <xfs@oss.sgi.com>, Ingo Molnar <mingo@elte.hu>
Subject: Re: [PATCH] Increase lockdep MAX_LOCK_DEPTH
Date: Fri, 31 Aug 2007 16:33:51 +0200
Message-ID: <1188570831.6112.64.camel@twins>
In-Reply-To: <20070831135042.GD422459@sgi.com>

On Fri, 2007-08-31 at 23:50 +1000, David Chinner wrote:
> On Fri, Aug 31, 2007 at 08:39:49AM +0200, Peter Zijlstra wrote:
> > On Thu, 2007-08-30 at 23:43 -0500, Eric Sandeen wrote:
> > > The xfs filesystem can exceed the current lockdep 
> > > MAX_LOCK_DEPTH, because when deleting an entire cluster of inodes,
> > > they all get locked in xfs_ifree_cluster().  The normal cluster
> > > size is 8192 bytes, and with the default (and minimum) inode size 
> > > of 256 bytes, that's up to 32 inodes that get locked.  Throw in a 
> > > few other locks along the way, and 40 seems enough to get me through
> > > all the tests in the xfsqa suite on 4k blocks.  (block sizes
> > > above 8K will still exceed this though, I think)
> > 
> > As 40 will still not be enough for people with larger block sizes, this
> > does not seems like a solid solution. Could XFS possibly batch in
> > smaller (fixed sized) chunks, or does that have significant down sides?
> 
> The problem is not filesystem block size, it's the xfs inode cluster buffer
> size / the size of the inodes that determines the lock depth. The common case
> is 8k/256 = 32 inodes in a buffer, and they all get locked during inode
> cluster writeback.
> 
> This inode writeback clustering is one of the reasons XFS doesn't suffer from
> atime issues as much as other filesystems - it doesn't need to do as much I/O
> to write back dirty inodes to disk.
> 
> IOWs, we are not going to make the inode clusters smaller - if anything they
> are going to get *larger* in future so we do less I/O during inode writeback
> than we do now.....

Since they are all trylocks, that seems to suggest there is no hard _need_
to lock a whole inode cluster at once, and XFS could iterate through it
with fewer inodes locked at a time.
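
Something along these lines is what I had in mind -- purely a sketch, and
every name in it (struct cluster, struct inode_item, cluster_inode(),
ilock_nowait(), iflush_locked(), iunlock()) is a placeholder rather than
the actual XFS interface:

#define BATCH	8

struct cluster;
struct inode_item;

extern struct inode_item *cluster_inode(struct cluster *c, int idx);
extern int  ilock_nowait(struct inode_item *ip);	/* trylock, 0 on failure */
extern void iflush_locked(struct inode_item *ip);
extern void iunlock(struct inode_item *ip);

/*
 * Walk the cluster in fixed-size chunks so at most BATCH inodes are
 * trylocked at any one time, instead of trylocking the whole cluster
 * up front.
 */
static void flush_cluster_batched(struct cluster *c, int ninodes)
{
	struct inode_item *batch[BATCH];
	int i, j, locked;

	for (i = 0; i < ninodes; i += BATCH) {
		locked = 0;

		/* gather up to BATCH trylocked inodes from this chunk */
		for (j = i; j < ninodes && j < i + BATCH; j++) {
			struct inode_item *ip = cluster_inode(c, j);

			if (ip && ilock_nowait(ip))
				batch[locked++] = ip;
		}

		/* flush and drop them before moving on to the next chunk */
		for (j = 0; j < locked; j++) {
			iflush_locked(batch[j]);
			iunlock(batch[j]);
		}
	}
}

That would bound the worst case at BATCH trylocked inodes (plus whatever
else is held around the loop), though I realize it may well defeat the
point of clustering the writeback in the first place.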

Granted I have absolutely no understanding of what I'm talking about :-)

Trouble is, we'd like to have a sane upper bound on the number of locks
held at any one time. Obviously that is just wishful thinking, because a
lot of lock chains also depend on the number of online CPUs...
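
For reference, the limit in question is the size of the per-task array
lockdep uses to record currently held locks; roughly the following
(quoting from memory, so the exact file, old value and field names may
differ between kernel versions):

/* include/linux/sched.h, under CONFIG_LOCKDEP (approximate) */
#define MAX_LOCK_DEPTH	40UL	/* Eric's patch raises this from ~30 */

/* and in struct task_struct: */
unsigned int		lockdep_depth;
struct held_lock	held_locks[MAX_LOCK_DEPTH];

Once lockdep_depth reaches MAX_LOCK_DEPTH, lockdep has nowhere to record
further held locks and gives up, which is what the patch is trying to
avoid.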


Thread overview: 11+ messages
2007-08-31  4:43 [PATCH] Increase lockdep MAX_LOCK_DEPTH Eric Sandeen
2007-08-31  6:39 ` Peter Zijlstra
2007-08-31 13:50   ` David Chinner
2007-08-31 14:33     ` Eric Sandeen
2007-08-31 14:36       ` Peter Zijlstra
2007-08-31 14:33     ` Peter Zijlstra [this message]
2007-08-31 15:05       ` David Chinner
2007-08-31 15:09         ` Peter Zijlstra
2007-08-31 15:11           ` Eric Sandeen
2007-08-31 15:19           ` David Chinner
2007-08-31 16:33           ` Josef Sipek
