From: Michal Hocko <mhocko@suse.com>
To: Christian Brauner <brauner@kernel.org>
Cc: Thomas Graf <tgraf@suug.ch>,
Herbert Xu <herbert@gondor.apana.org.au>,
Andrew Morton <akpm@linux-foundation.org>,
Vlastimil Babka <vbabka@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
David Hildenbrand <david@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
syzbot+5af806780f38a5fe691f@syzkaller.appspotmail.com
Subject: Re: [PATCH] rhashtable: give each instance its own lockdep class
Date: Mon, 27 Apr 2026 15:01:33 +0200 [thread overview]
Message-ID: <ae9eLXK72grTV2wd@tiehlicka> (raw)
In-Reply-To: <20260427-work-rhashtable-lockdep-v1-1-f69e8bd91cb2@kernel.org>

On Mon 27-04-26 13:09:57, Christian Brauner wrote:
> syzbot reported a possible circular locking dependency between
> &ht->mutex and fs_reclaim:
>
> CPU0 (kswapd0)                  CPU1 (kworker)
> --------------                  --------------
> fs_reclaim                      ht->mutex
> shmem_evict_inode               rhashtable_rehash_alloc
> simple_xattrs_free              bucket_table_alloc(GFP_KERNEL)
> rhashtable_free_and_destroy     __kvmalloc_node
> mutex_lock(&ht->mutex)          might_alloc -> fs_reclaim
>
> The two halves of the splat refer to two different events on
> &ht->mutex.
>
> The kswapd0 path is unambiguous: shmem_evict_inode at mm/shmem.c:1429
> calls simple_xattrs_free(), which calls rhashtable_free_and_destroy()
> on the per-inode simple_xattrs rhashtable being torn down with the
> inode.
>
> The previously-recorded ht->mutex -> fs_reclaim edge comes from
> rht_deferred_worker -> rhashtable_rehash_alloc ->
> bucket_table_alloc(GFP_KERNEL) -> __kvmalloc_node ->
> might_alloc -> fs_reclaim. That stack stops at generic library code:
> there is no subsystem-specific frame above rht_deferred_worker, so
> the splat does not identify which rhashtable's worker recorded the
> edge -- only that some rhashtable in the system did.
>
> Whether or not that recording happened on the same simple_xattrs ht
> that is now being destroyed, the predicted deadlock cannot occur:
> rhashtable_free_and_destroy() does cancel_work_sync(&ht->run_work)
> before taking ht->mutex, so the deferred worker cannot be running on
> the instance being torn down. If the recording was on a different
> rhashtable instance, the two ht->mutex acquisitions are on distinct
> mutex objects and cannot deadlock either.
>
> Lockdep flags a cycle regardless because mutex_init(&ht->mutex) lives
> on a single source line in rhashtable_init_noprof(), so every
> ht->mutex in the kernel shares one static lockdep class. Lockdep
> matches by class, not by instance, and collapses all of these into
> one node.
>
> Lift the lockdep key out of rhashtable_init_noprof() and into the
> caller. The user-visible rhashtable_init_noprof() /
> rhltable_init_noprof() identifiers become macros that declare a
> per-call-site static lock_class_key.
>
> Reported-by: syzbot+5af806780f38a5fe691f@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/69e798fe.050a0220.24bfd3.0032.GAE@google.com
> Signed-off-by: Christian Brauner <brauner@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!
> ---
> include/linux/rhashtable-types.h | 22 ++++++++++++++++++----
> lib/rhashtable.c | 17 ++++++++++-------
> 2 files changed, 28 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/rhashtable-types.h b/include/linux/rhashtable-types.h
> index 015c8298bebc..841021c67d3d 100644
> --- a/include/linux/rhashtable-types.h
> +++ b/include/linux/rhashtable-types.h
> @@ -131,12 +131,26 @@ struct rhashtable_iter {
>  	bool end_of_table;
>  };
>  
> -int rhashtable_init_noprof(struct rhashtable *ht,
> -			   const struct rhashtable_params *params);
> +int __rhashtable_init_noprof(struct rhashtable *ht,
> +			     const struct rhashtable_params *params,
> +			     struct lock_class_key *key);
> +#define rhashtable_init_noprof(ht, params)			\
> +({								\
> +	static struct lock_class_key __key;			\
> +								\
> +	__rhashtable_init_noprof(ht, params, &__key);		\
> +})
>  #define rhashtable_init(...) alloc_hooks(rhashtable_init_noprof(__VA_ARGS__))
>  
> -int rhltable_init_noprof(struct rhltable *hlt,
> -			 const struct rhashtable_params *params);
> +int __rhltable_init_noprof(struct rhltable *hlt,
> +			   const struct rhashtable_params *params,
> +			   struct lock_class_key *key);
> +#define rhltable_init_noprof(hlt, params)			\
> +({								\
> +	static struct lock_class_key __key;			\
> +								\
> +	__rhltable_init_noprof(hlt, params, &__key);		\
> +})
>  #define rhltable_init(...) alloc_hooks(rhltable_init_noprof(__VA_ARGS__))
>  
>  #endif /* _LINUX_RHASHTABLE_TYPES_H */
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index 6074ed5f66f3..fb13749d824a 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -1025,8 +1025,9 @@ static u32 rhashtable_jhash2(const void *key, u32 length, u32 seed)
>   *	.obj_hashfn = my_hash_fn,
>   * };
>   */
> -int rhashtable_init_noprof(struct rhashtable *ht,
> -			   const struct rhashtable_params *params)
> +int __rhashtable_init_noprof(struct rhashtable *ht,
> +			     const struct rhashtable_params *params,
> +			     struct lock_class_key *key)
>  {
>  	struct bucket_table *tbl;
>  	size_t size;
> @@ -1036,7 +1037,7 @@ int rhashtable_init_noprof(struct rhashtable *ht,
>  		return -EINVAL;
>  
>  	memset(ht, 0, sizeof(*ht));
> -	mutex_init(&ht->mutex);
> +	mutex_init_with_key(&ht->mutex, key);
>  	spin_lock_init(&ht->lock);
>  	memcpy(&ht->p, params, sizeof(*params));
>  
> @@ -1087,7 +1088,7 @@ int rhashtable_init_noprof(struct rhashtable *ht,
>  
>  	return 0;
>  }
> -EXPORT_SYMBOL_GPL(rhashtable_init_noprof);
> +EXPORT_SYMBOL_GPL(__rhashtable_init_noprof);
>  
>  /**
>   * rhltable_init - initialize a new hash list table
> @@ -1098,15 +1099,17 @@ EXPORT_SYMBOL_GPL(rhashtable_init_noprof);
>   *
>   * See documentation for rhashtable_init.
>   */
> -int rhltable_init_noprof(struct rhltable *hlt, const struct rhashtable_params *params)
> +int __rhltable_init_noprof(struct rhltable *hlt,
> +			   const struct rhashtable_params *params,
> +			   struct lock_class_key *key)
>  {
>  	int err;
>  
> -	err = rhashtable_init_noprof(&hlt->ht, params);
> +	err = __rhashtable_init_noprof(&hlt->ht, params, key);
>  	hlt->ht.rhlist = true;
>  	return err;
>  }
> -EXPORT_SYMBOL_GPL(rhltable_init_noprof);
> +EXPORT_SYMBOL_GPL(__rhltable_init_noprof);
>  
>  static void rhashtable_free_one(struct rhashtable *ht, struct rhash_head *obj,
>  				void (*free_fn)(void *ptr, void *arg),
>
> ---
> base-commit: 6596a02b207886e9e00bb0161c7fd59fea53c081
> change-id: 20260427-work-rhashtable-lockdep-cb0356367073
--
Michal Hocko
SUSE Labs
Thread overview: 7+ messages
2026-04-27 11:09 [PATCH] rhashtable: give each instance its own lockdep class Christian Brauner
2026-04-27 11:29 ` Herbert Xu
2026-04-27 12:21 ` Christian Brauner
2026-04-27 12:51 ` Mikhail Gavrilov
2026-04-27 11:31 ` Andrew Morton
2026-04-27 11:34 ` Herbert Xu
2026-04-27 13:01 ` Michal Hocko [this message]