From: Al Viro <viro@ZenIV.linux.org.uk>
To: Jeff Layton <jlayton@poochiereds.net>
Cc: linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Trond Myklebust <trond.myklebust@primarydata.com>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: parallel lookups on NFS
Date: Sun, 24 Apr 2016 20:18:35 +0100
Message-ID: <20160424191835.GL25498@ZenIV.linux.org.uk>
In-Reply-To: <1461501975.5219.40.camel@poochiereds.net>
On Sun, Apr 24, 2016 at 08:46:15AM -0400, Jeff Layton wrote:
> > Suggestions?  Right now my local tree has nfs_lookup() and
> > nfs_readdir() run with directory locked shared.  And they are still
> > serialized by the damn ->silly_count ;-/
> >
>
> Hmm...well, most of that was added in commit 565277f63c61. Looking at
> the bug referenced in that commit log, I think that the main thing we
> want to ensure is that rmdir waits until all of the sillydeletes for
> files in its directory have finished.
>
> But...there's also some code to ensure that if a lookup races in while
> we're doing the sillydelete that we transfer all of the dentry info to
> the new alias. That's the messy part.
It's a bit more than that - AFAICS, it also wants to prevent a lookup that
comes in after we've started building the unlink request from getting
serviced before that unlink.
> The new infrastructure for parallel lookups might make it simpler
> actually. When we go to do the sillydelete, could we add the dentry to
> the "lookup in progress" hash that you're adding as part of the
> parallel lookup infrastructure? Then the tasks doing lookups could find
> it and just wait on the sillydelete to complete. After the sillydelete,
> we'd turn the thing into a negative dentry and then wake up the waiters
> (if any). That does make the whole dentry teardown a bit more complex
> though.
Umm... A lot more complex, unfortunately - if anything, I would allocate
a _new_ dentry at sillyrename time and use it pretty much instead of
your nfs_unlinkdata. Unhashed, negative and pinned down. And insert it
into the in-lookup hash only when we get around to the actual delayed unlink.
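Purely to illustrate what I mean (the helper name is made up, and hanging
the remaining unlink state off ->d_fsdata is just an assumption at this
point), something like:

/*
 * Hypothetical sketch, not a patch: allocate the replacement dentry at
 * sillyrename time.  d_alloc() gives exactly what we want here - a
 * negative (no inode), unhashed dentry with a reference held, i.e.
 * pinned down until we drop it once the delayed unlink has finished.
 */
static struct dentry *nfs_alloc_silly_dentry(struct dentry *parent,
					     const struct qstr *silly_name,
					     struct nfs_unlinkdata *data)
{
	struct dentry *alias = d_alloc(parent, silly_name);

	if (!alias)
		return NULL;
	alias->d_fsdata = data;	/* assumption: the delayed unlink finds it here */
	return alias;
}
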
First of all, dentries in in-lookup hash *must* be negative before they can
be inserted there. That wouldn't be so much PITA per se, but we also have
a problem with the sucker possibly being on a shrink list and only the owner
of the shrink list can remove it from there. So this kind of reuse would be
hard to arrange.
Do we want to end up with a negative hashed dentry after that unlink, though?
Sure, we know it's negative, but will anyone really try to look it up
afterwards?  IOW, is it anything but hash pollution?  What's more,
unlike the in-lookup hash, there are assumptions about the primary one; namely,
a directory-modifying operation can be certain that nobody else will be adding
entries to the hash behind its back, positive or negative.
I'm not at all sure that NFS doesn't rely upon this.  The in-lookup hash
gives no such guarantees (and still has very few users, of course), so we can
afford adding stuff to it without bothering with locking the parent directory.
_IF_ we don't try to add anything to the primary hash, we can bloody well use
it without ->i_rwsem on the parent.
AFAICS, ->args is redundant if you have the sucker with the right name/parent.
So's ->dir; the rest is probably still needed, so a shrunk nfs_unlinkdata
would still be necessary, more's the pity...  Well, we can set ->d_fsdata of
the replacement dentry to point to it.
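Roughly what would be left (field names loosely follow the current struct;
treat this as a sketch, not a patch):

/*
 * Shrunk nfs_unlinkdata: ->args and ->dir are gone, since the name and
 * the parent directory can be recovered from the replacement dentry
 * whose ->d_fsdata points to this.
 */
struct nfs_unlinkdata {
	struct nfs_removeres res;	/* REMOVE reply */
	struct rpc_cred *cred;		/* creds captured at sillyrename time */
	struct nfs_fattr dir_attr;	/* post-op attributes of the parent */
	long timeout;			/* retry timeout for the async unlink */
};
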
And yes, it means allocation in two pieces instead of just one when we hit
sillyrename. Seeing that it's doing at least two RPC roundtrips right in
nfs_sillyrename(), I think that overhead isn't a serious worry.
What we get out of that is fully parallel lookup/readdir/sillyunlink - all
exclusion is on per-name basis (nfs_prime_dcache() vs. nfs_lookup() vs.
nfs_do_call_unlink()). It will require a bit of care in atomic_open(),
though...
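Just to show where the per-name exclusion would bite, a very rough sketch of
the delayed-unlink side in terms of the in-lookup primitives
(d_alloc_parallel() and friends; the helper name below is made up).
Deliberately simplified - it claims the name at unlink time rather than
reusing the dentry set aside at sillyrename time, and the RPC is shown as if
it were synchronous:

/*
 * Illustration only: claim the silly name in the in-lookup hash, so a
 * racing lookup of that name blocks in d_alloc_parallel() until
 * d_lookup_done() below; nfs_prime_dcache() and nfs_lookup() on other
 * names proceed in parallel.
 */
static void nfs_sillyunlink(struct dentry *parent, const struct qstr *name)
{
	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
	struct dentry *alias;

	alias = d_alloc_parallel(parent, name, &wq);
	if (IS_ERR(alias))
		return;
	if (d_in_lookup(alias)) {
		/* we own the name now; the REMOVE RPC would go here
		 * (asynchronously in the real thing) */
		d_lookup_done(alias);	/* wake up anyone who raced with us */
	}
	dput(alias);
}
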
I'll play with that a bit and see what can be done...