From: Nick Piggin <npiggin@suse.de>
To: Andi Kleen <andi@firstfloor.org>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 09/14] fs: use RCU / seqlock logic for reverse and multi-step operaitons
Date: Mon, 30 Mar 2009 14:29:02 +0200 [thread overview]
Message-ID: <20090330122902.GG31000@wotan.suse.de> (raw)
In-Reply-To: <874oxbnr2m.fsf@basil.nowhere.org>
On Mon, Mar 30, 2009 at 02:16:49PM +0200, Andi Kleen wrote:
> npiggin@suse.de writes:
>
> > The remaining usages for dcache_lock is to allow atomic, multi-step read-side
> > operations over the directory tree by excluding modifications to the tree.
> > Also, to walk in the leaf->root direction in the tree where we don't have
> > a natural d_lock ordering. This is the hardest bit.
>
> General thoughts: is there a way to add a self testing infrastructure
> to this. e.g. by having more sequence counts per object (only enabled
> in the debug case, so it doesn't matter when cache line bounces) and lots of
> checks?
>
> I suppose that would lower the work needed of actually fixing this to
> work significantly.
Might be a good idea. I'll think about whether it can be done.
Note that I *think* the idea is pretty sound, but I'm just not
quite sure about checking for parent being deleted when we're
walking back up the tree -- d_unhashed() doesn't seem to work
because we can encounter unhashed parents by design. We might
just need another d_flag...
next prev parent reply other threads:[~2009-03-30 12:29 UTC|newest]
Thread overview: 28+ messages / expand[flat|nested] mbox.gz Atom feed top
2009-03-29 15:55 [rfc] scale dcache locking npiggin
2009-03-29 15:55 ` [patch 01/14] fs: dcache fix LRU ordering npiggin
2009-03-29 15:55 ` [patch 02/14] fs: dcache scale hash npiggin
2009-03-29 15:55 ` [patch 03/14] fs: dcache scale lru npiggin
2009-03-29 15:55 ` [patch 04/14] fs: dcache scale nr_dentry npiggin
2009-03-29 15:55 ` [patch 05/14] fs: dcache scale dentry refcount npiggin
2009-03-29 15:55 ` [patch 06/14] fs: dcache scale d_unhashed npiggin
2009-03-29 15:55 ` [patch 07/14] fs: dcache scale subdirs npiggin
2009-03-29 15:55 ` [patch 08/14] fs: scale inode alias list npiggin
2009-03-30 12:18 ` Andi Kleen
2009-03-30 12:31 ` Nick Piggin
2009-03-29 15:55 ` [patch 09/14] fs: use RCU / seqlock logic for reverse and multi-step operaitons npiggin
2009-03-30 12:16 ` Andi Kleen
2009-03-30 12:29 ` Nick Piggin [this message]
2009-03-30 12:43 ` Andi Kleen
2009-03-29 15:55 ` [patch 10/14] fs: dcache remove dcache_lock npiggin
2009-03-29 15:55 ` [patch 11/14] fs: dcache reduce dput locking npiggin
2009-03-29 15:55 ` [patch 12/14] fs: dcache per-bucket dcache hash locking npiggin
2009-03-30 12:14 ` Andi Kleen
2009-03-30 12:27 ` Nick Piggin
2009-03-30 12:47 ` Andi Kleen
2009-03-30 12:59 ` Nick Piggin
2009-03-30 18:00 ` Christoph Hellwig
2009-03-31 1:57 ` Nick Piggin
2009-03-29 15:55 ` [patch 13/14] fs: dcache reduce dcache_inode_lock npiggin
2009-03-29 15:55 ` [patch 14/14] fs: dcache per-inode inode alias locking npiggin
2009-04-01 14:23 ` [rfc] scale dcache locking Al Viro
2009-04-02 9:43 ` Nick Piggin
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20090330122902.GG31000@wotan.suse.de \
--to=npiggin@suse.de \
--cc=andi@firstfloor.org \
--cc=linux-fsdevel@vger.kernel.org \
--cc=linux-kernel@vger.kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).