From: Al Viro <viro@zeniv.linux.org.uk>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-fsdevel@vger.kernel.org
Subject: lockless case of retain_dentry() (was Re: [PATCH 09/15] fold the call of retain_dentry() into fast_dput())
Date: Fri, 10 Nov 2023 04:20:41 +0000
Message-ID: <20231110042041.GL1957730@ZenIV>
In-Reply-To: <20231101181910.GH1957730@ZenIV>
On Wed, Nov 01, 2023 at 06:19:10PM +0000, Al Viro wrote:
> gcc-12 on x86 turns the series of ifs into
>         movl %edi, %eax
>         andl $32832, %eax
>         cmpl $32832, %eax
>         jne .L17
>         andl $168, %edi
>         jne .L17
> instead of combining that into
>         andl $33000, %edi
>         cmpl $32832, %edi
>         jne .L17
>
> OTOH, that's not much of pessimization... Up to you.
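For reference, the transformation in question is plain mask merging; a minimal
user-space sketch (the MUST_BE_SET/MUST_BE_CLEAR names are placeholders for
illustration, the constants are the ones from the quoted assembly):

#include <stdbool.h>

/* placeholder masks, taken from the quoted assembly */
#define MUST_BE_SET     32832u  /* bits that must all be set */
#define MUST_BE_CLEAR   168u    /* bits that must all be clear */

/* what the series of ifs amounts to: two test-and-branch sequences */
static bool fast_case_two_tests(unsigned int flags)
{
        if ((flags & MUST_BE_SET) != MUST_BE_SET)
                return false;
        if (flags & MUST_BE_CLEAR)
                return false;
        return true;
}

/* the combined form: one mask, one compare (32832 | 168 == 33000) */
static bool fast_case_combined(unsigned int flags)
{
        return (flags & (MUST_BE_SET | MUST_BE_CLEAR)) == MUST_BE_SET;
}

Since the two masks are disjoint, the forms are equivalent - that's the
combining step gcc doesn't do here.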
FWIW, on top of current #work.dcache2 the following delta might be worth
looking into. Not sure if it's less confusing that way, though - I'd been staring
at that place for too long. Code generation is slightly suboptimal with recent
gcc, but only marginally so.
diff --git a/fs/dcache.c b/fs/dcache.c
index bd57b9a08894..9e1486db64a7 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -665,30 +665,57 @@ static bool lock_for_kill(struct dentry *dentry)
         return false;
 }
 
-static inline bool retain_dentry(struct dentry *dentry)
+/*
+ * Decide if dentry is worth retaining. Usually this is called with dentry
+ * locked; if not locked, we are more limited and might not be able to tell
+ * without a lock. False in this case means "punt to locked path and recheck".
+ *
+ * In case we aren't locked, these predicates are not "stable". However, it is
+ * sufficient that at some point after we dropped the reference the dentry was
+ * hashed and the flags had the proper value. Other dentry users may have
+ * re-gotten a reference to the dentry and change that, but our work is done -
+ * we can leave the dentry around with a zero refcount.
+ */
+static inline bool retain_dentry(struct dentry *dentry, bool locked)
 {
-        WARN_ON(d_in_lookup(dentry));
+        unsigned int d_flags;
 
-        /* Unreachable? Get rid of it */
+        smp_rmb();
+        d_flags = READ_ONCE(dentry->d_flags);
+
+        // Unreachable? Nobody would be able to look it up, no point retaining
         if (unlikely(d_unhashed(dentry)))
                 return false;
 
-        if (unlikely(dentry->d_flags & DCACHE_DISCONNECTED))
+        // Same if it's disconnected
+        if (unlikely(d_flags & DCACHE_DISCONNECTED))
                 return false;
 
-        if (unlikely(dentry->d_flags & DCACHE_OP_DELETE)) {
-                if (dentry->d_op->d_delete(dentry))
+        // ->d_delete() might tell us not to bother, but that requires
+        // ->d_lock; can't decide without it
+        if (unlikely(d_flags & DCACHE_OP_DELETE)) {
+                if (!locked || dentry->d_op->d_delete(dentry))
                         return false;
         }
 
-        if (unlikely(dentry->d_flags & DCACHE_DONTCACHE))
+        // Explicitly told not to bother
+        if (unlikely(d_flags & DCACHE_DONTCACHE))
                 return false;
 
-        /* retain; LRU fodder */
-        if (unlikely(!(dentry->d_flags & DCACHE_LRU_LIST)))
+        // At this point it looks like we ought to keep it. We also might
+        // need to do something - put it on LRU if it wasn't there already
+        // and mark it referenced if it was on LRU, but not marked yet.
+        // Unfortunately, both actions require ->d_lock, so in lockless
+        // case we'd have to punt rather than doing those.
+        if (unlikely(!(d_flags & DCACHE_LRU_LIST))) {
+                if (!locked)
+                        return false;
                 d_lru_add(dentry);
-        else if (unlikely(!(dentry->d_flags & DCACHE_REFERENCED)))
+        } else if (unlikely(!(d_flags & DCACHE_REFERENCED))) {
+                if (!locked)
+                        return false;
                 dentry->d_flags |= DCACHE_REFERENCED;
+        }
         return true;
 }
 
@@ -720,7 +747,6 @@ EXPORT_SYMBOL(d_mark_dontcache);
 static inline bool fast_dput(struct dentry *dentry)
 {
         int ret;
-        unsigned int d_flags;
 
         /*
          * try to decrement the lockref optimistically.
@@ -749,45 +775,18 @@ static inline bool fast_dput(struct dentry *dentry)
                 return true;
 
         /*
-         * Careful, careful. The reference count went down
-         * to zero, but we don't hold the dentry lock, so
-         * somebody else could get it again, and do another
-         * dput(), and we need to not race with that.
-         *
-         * However, there is a very special and common case
-         * where we don't care, because there is nothing to
-         * do: the dentry is still hashed, it does not have
-         * a 'delete' op, and it's referenced and already on
-         * the LRU list.
-         *
-         * NOTE! Since we aren't locked, these values are
-         * not "stable". However, it is sufficient that at
-         * some point after we dropped the reference the
-         * dentry was hashed and the flags had the proper
-         * value. Other dentry users may have re-gotten
-         * a reference to the dentry and change that, but
-         * our work is done - we can leave the dentry
-         * around with a zero refcount.
-         *
-         * Nevertheless, there are two cases that we should kill
-         * the dentry anyway.
-         * 1. free disconnected dentries as soon as their refcount
-         *    reached zero.
-         * 2. free dentries if they should not be cached.
+         * Can we decide that decrement of refcount is all we needed without
+         * taking the lock? There's a very common case when it's all we need -
+         * dentry looks like it ought to be retained and there's nothing else
+         * to do.
          */
-        smp_rmb();
-        d_flags = READ_ONCE(dentry->d_flags);
-        d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST | DCACHE_OP_DELETE |
-                        DCACHE_DISCONNECTED | DCACHE_DONTCACHE;
-
-        /* Nothing to do? Dropping the reference was all we needed? */
-        if (d_flags == (DCACHE_REFERENCED | DCACHE_LRU_LIST) && !d_unhashed(dentry))
+        if (retain_dentry(dentry, false))
                 return true;
 
         /*
-         * Not the fast normal case? Get the lock. We've already decremented
-         * the refcount, but we'll need to re-check the situation after
-         * getting the lock.
+         * Either not worth retaining or we can't tell without the lock.
+         * Get the lock, then. We've already decremented the refcount to 0,
+         * but we'll need to re-check the situation after getting the lock.
          */
         spin_lock(&dentry->d_lock);
 
@@ -798,7 +797,7 @@ static inline bool fast_dput(struct dentry *dentry)
          * don't need to do anything else.
          */
 locked:
-        if (dentry->d_lockref.count || retain_dentry(dentry)) {
+        if (dentry->d_lockref.count || retain_dentry(dentry, true)) {
                 spin_unlock(&dentry->d_lock);
                 return true;
         }
@@ -847,7 +846,7 @@ void dput(struct dentry *dentry)
                 dentry = __dentry_kill(dentry);
                 if (!dentry)
                         return;
-                if (retain_dentry(dentry)) {
+                if (retain_dentry(dentry, true)) {
                         spin_unlock(&dentry->d_lock);
                         return;
                 }
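
To spell out the lockless case: with locked == false, retain_dentry() above is
meant to be equivalent to the combined check it replaces in fast_dput(), i.e.
roughly (the helper below is just a restatement for illustration, not part of
the delta):

/* not in the patch - what retain_dentry(dentry, false) boils down to */
static inline bool retain_dentry_lockless(const struct dentry *dentry)
{
        unsigned int d_flags;

        smp_rmb();
        d_flags = READ_ONCE(dentry->d_flags);
        d_flags &= DCACHE_REFERENCED | DCACHE_LRU_LIST | DCACHE_OP_DELETE |
                   DCACHE_DISCONNECTED | DCACHE_DONTCACHE;
        /* keep only if hashed, on LRU, referenced and nothing special set */
        return d_flags == (DCACHE_REFERENCED | DCACHE_LRU_LIST) &&
               !d_unhashed(dentry);
}

Anything that would need ->d_lock to act upon (->d_delete(), LRU insertion,
setting DCACHE_REFERENCED) makes the lockless caller return false and fall
back to the locked path.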