* [GIT PULL] [PATCH 0/3] LOCKDEP changes for v6.16
From: Boqun Feng @ 2025-05-06 4:20 UTC
To: Ingo Molnar, Peter Zijlstra
Cc: Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, llvm
Hi Ingo & Peter,
Please pull the lockdep changes for v6.16 into tip. I'm sending the
changes in patchset, but I also created a tag and the pull-request
message in case you want to directly pull. Thanks!
Regards,
Boqun
The following changes since commit 35e6b537af85d97e0aafd8f2829dfa884a22df20:
lockdep: Remove disable_irq_lockdep() (2025-03-14 21:13:20 +0100)
are available in the Git repository at:
git@gitolite.kernel.org:pub/scm/linux/kernel/git/boqun/linux tags/lockdep-for-tip.2025.05.05
for you to fetch changes up to b3eec4e26ada8a71c40147b45026d2345f3b6ae3:
locking/lockdep: Add # of dynamic keys stat to /proc/lockdep_stats (2025-05-04 11:03:02 -0700)
----------------------------------------------------------------
Lockdep changes for v6.16:
- Move hlock_equal() only under CONFIG_LOCKDEP_SMALL=y
- Prevent abuse of lockdep subclass in __lock_acquire()
- Add # of dynamic keys stat to /proc/lockdep_stats
----------------------------------------------------------------
Andy Shevchenko (1):
lockdep: Move hlock_equal() to the respective ifdeffery
Waiman Long (2):
locking/lockdep: Prevent abuse of lockdep subclass
locking/lockdep: Add # of dynamic keys stat to /proc/lockdep_stats
kernel/locking/lockdep.c | 76 ++++++++++++++++++++------------------
kernel/locking/lockdep_internals.h | 1 +
kernel/locking/lockdep_proc.c | 2 +
3 files changed, 44 insertions(+), 35 deletions(-)
--
2.39.5 (Apple Git-154)
* [PATCH 1/3] lockdep: Move hlock_equal() to the respective ifdeffery
From: Boqun Feng @ 2025-05-06 4:20 UTC
To: Ingo Molnar, Peter Zijlstra
Cc: Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, llvm, Andy Shevchenko
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
When hlock_equal() is unused, it prevents kernel builds with clang,
`make W=1` and CONFIG_WERROR=y, CONFIG_LOCKDEP=y and
CONFIG_LOCKDEP_SMALL=n:
lockdep.c:2005:20: error: unused function 'hlock_equal' [-Werror,-Wunused-function]
Fix this by moving the function under the existing ifdeffery of its only user.
See also commit 6863f5643dd7 ("kbuild: allow Clang to find unused static
inline functions for W=1 build").
Fixes: 68e305678583 ("lockdep: Adjust check_redundant() for recursive read change")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20250415085857.495543-1-andriy.shevchenko@linux.intel.com
---
kernel/locking/lockdep.c | 70 ++++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 35 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index b15757e63626..ff2ce90a87bc 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1976,41 +1976,6 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
print_circular_bug_entry(entry, depth);
}
-/*
- * We are about to add A -> B into the dependency graph, and in __bfs() a
- * strong dependency path A -> .. -> B is found: hlock_class equals
- * entry->class.
- *
- * If A -> .. -> B can replace A -> B in any __bfs() search (means the former
- * is _stronger_ than or equal to the latter), we consider A -> B as redundant.
- * For example if A -> .. -> B is -(EN)-> (i.e. A -(E*)-> .. -(*N)-> B), and A
- * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the
- * dependency graph, as any strong path ..-> A -> B ->.. we can get with
- * having dependency A -> B, we could already get a equivalent path ..-> A ->
- * .. -> B -> .. with A -> .. -> B. Therefore A -> B is redundant.
- *
- * We need to make sure both the start and the end of A -> .. -> B is not
- * weaker than A -> B. For the start part, please see the comment in
- * check_redundant(). For the end part, we need:
- *
- * Either
- *
- * a) A -> B is -(*R)-> (everything is not weaker than that)
- *
- * or
- *
- * b) A -> .. -> B is -(*N)-> (nothing is stronger than this)
- *
- */
-static inline bool hlock_equal(struct lock_list *entry, void *data)
-{
- struct held_lock *hlock = (struct held_lock *)data;
-
- return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */
- (hlock->read == 2 || /* A -> B is -(*R)-> */
- !entry->only_xr); /* A -> .. -> B is -(*N)-> */
-}
-
/*
* We are about to add B -> A into the dependency graph, and in __bfs() a
* strong dependency path A -> .. -> B is found: hlock_class equals
@@ -2915,6 +2880,41 @@ static inline bool usage_skip(struct lock_list *entry, void *mask)
#endif /* CONFIG_TRACE_IRQFLAGS */
#ifdef CONFIG_LOCKDEP_SMALL
+/*
+ * We are about to add A -> B into the dependency graph, and in __bfs() a
+ * strong dependency path A -> .. -> B is found: hlock_class equals
+ * entry->class.
+ *
+ * If A -> .. -> B can replace A -> B in any __bfs() search (means the former
+ * is _stronger_ than or equal to the latter), we consider A -> B as redundant.
+ * For example if A -> .. -> B is -(EN)-> (i.e. A -(E*)-> .. -(*N)-> B), and A
+ * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the
+ * dependency graph, as any strong path ..-> A -> B ->.. we can get with
+ * having dependency A -> B, we could already get a equivalent path ..-> A ->
+ * .. -> B -> .. with A -> .. -> B. Therefore A -> B is redundant.
+ *
+ * We need to make sure both the start and the end of A -> .. -> B is not
+ * weaker than A -> B. For the start part, please see the comment in
+ * check_redundant(). For the end part, we need:
+ *
+ * Either
+ *
+ * a) A -> B is -(*R)-> (everything is not weaker than that)
+ *
+ * or
+ *
+ * b) A -> .. -> B is -(*N)-> (nothing is stronger than this)
+ *
+ */
+static inline bool hlock_equal(struct lock_list *entry, void *data)
+{
+ struct held_lock *hlock = (struct held_lock *)data;
+
+ return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */
+ (hlock->read == 2 || /* A -> B is -(*R)-> */
+ !entry->only_xr); /* A -> .. -> B is -(*N)-> */
+}
+
/*
* Check that the dependency graph starting at <src> can lead to
* <target> or not. If it can, <src> -> <target> dependency is already
--
2.39.5 (Apple Git-154)
* [PATCH 2/3] locking/lockdep: Prevent abuse of lockdep subclass
From: Boqun Feng @ 2025-05-06 4:20 UTC
To: Ingo Molnar, Peter Zijlstra
Cc: Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, llvm
From: Waiman Long <longman@redhat.com>
To catch the code trying to use a subclass value >=
MAX_LOCKDEP_SUBCLASSES (8), add a DEBUG_LOCKS_WARN_ON() statement to
notify the users that such a large value is not allowed.
[boqun: Reword the commit log with a more objective tone]
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20250409143751.2010391-1-longman@redhat.com
---
kernel/locking/lockdep.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index ff2ce90a87bc..58883c8375d1 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5101,6 +5101,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
lockevent_inc(lockdep_nocheck);
}
+ if (DEBUG_LOCKS_WARN_ON(subclass >= MAX_LOCKDEP_SUBCLASSES))
+ return 0;
+
if (subclass < NR_LOCKDEP_CACHING_CLASSES)
class = lock->class_cache[subclass];
/*
--
2.39.5 (Apple Git-154)
* [PATCH 3/3] locking/lockdep: Add # of dynamic keys stat to /proc/lockdep_stats
From: Boqun Feng @ 2025-05-06 4:20 UTC
To: Ingo Molnar, Peter Zijlstra
Cc: Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, llvm
From: Waiman Long <longman@redhat.com>
There have been recent reports about running out of lockdep keys
(MAX_LOCKDEP_KEYS too low!). One possible reason is that too many
dynamic keys have been registered. A possible culprit is the
lockdep_register_key() call in qdisc_alloc() of net/sched/sch_generic.c.
Currently, there is no way to find out how many dynamic keys have been
registered. Add such a stat to /proc/lockdep_stats for better clarity.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20250425001155.775458-1-longman@redhat.com
---
kernel/locking/lockdep.c | 3 +++
kernel/locking/lockdep_internals.h | 1 +
kernel/locking/lockdep_proc.c | 2 ++
3 files changed, 6 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 58883c8375d1..e7166ff64681 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -219,6 +219,7 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
unsigned long nr_lock_classes;
unsigned long nr_zapped_classes;
+unsigned long nr_dynamic_keys;
unsigned long max_lock_class_idx;
struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
@@ -1238,6 +1239,7 @@ void lockdep_register_key(struct lock_class_key *key)
goto out_unlock;
}
hlist_add_head_rcu(&key->hash_entry, hash_head);
+ nr_dynamic_keys++;
out_unlock:
graph_unlock();
restore_irqs:
@@ -6606,6 +6608,7 @@ void lockdep_unregister_key(struct lock_class_key *key)
pf = get_pending_free();
__lockdep_free_key_range(pf, key, 1);
need_callback = prepare_call_rcu_zapped(pf);
+ nr_dynamic_keys--;
}
lockdep_unlock();
raw_local_irq_restore(flags);
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 20f9ef58d3d0..82156caf77d1 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -138,6 +138,7 @@ extern unsigned long nr_lock_classes;
extern unsigned long nr_zapped_classes;
extern unsigned long nr_zapped_lock_chains;
extern unsigned long nr_list_entries;
+extern unsigned long nr_dynamic_keys;
long lockdep_next_lockchain(long i);
unsigned long lock_chain_count(void);
extern unsigned long nr_stack_trace_entries;
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 6db0f43fc4df..b52c07c4707c 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -286,6 +286,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
#endif
seq_printf(m, " lock-classes: %11lu [max: %lu]\n",
nr_lock_classes, MAX_LOCKDEP_KEYS);
+ seq_printf(m, " dynamic-keys: %11lu\n",
+ nr_dynamic_keys);
seq_printf(m, " direct dependencies: %11lu [max: %lu]\n",
nr_list_entries, MAX_LOCKDEP_ENTRIES);
seq_printf(m, " indirect dependencies: %11lu\n",
--
2.39.5 (Apple Git-154)
* Re: [GIT PULL] [PATCH 0/3] LOCKDEP changes for v6.16
From: Ingo Molnar @ 2025-05-06 17:23 UTC
To: Boqun Feng
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
Nathan Chancellor, Nick Desaulniers, Bill Wendling, Justin Stitt,
linux-kernel, llvm
* Boqun Feng <boqun.feng@gmail.com> wrote:
> Hi Ingo & Peter,
>
> Please pull the lockdep changes for v6.16 into tip. I'm sending the
> changes in patchset, but I also created a tag and the pull-request
> message in case you want to directly pull. Thanks!
>
> Regards,
> Boqun
>
> The following changes since commit 35e6b537af85d97e0aafd8f2829dfa884a22df20:
>
> lockdep: Remove disable_irq_lockdep() (2025-03-14 21:13:20 +0100)
>
> are available in the Git repository at:
>
> git@gitolite.kernel.org:pub/scm/linux/kernel/git/boqun/linux tags/lockdep-for-tip.2025.05.05
>
> for you to fetch changes up to b3eec4e26ada8a71c40147b45026d2345f3b6ae3:
>
> locking/lockdep: Add # of dynamic keys stat to /proc/lockdep_stats (2025-05-04 11:03:02 -0700)
>
> ----------------------------------------------------------------
> Lockdep changes for v6.16:
>
> - Move hlock_equal() only under CONFIG_LOCKDEP_SMALL=y
> - Prevent abuse of lockdep subclass in __lock_acquire()
> - Add # of dynamic keys stat to /proc/lockdep_stats
>
> ----------------------------------------------------------------
> Andy Shevchenko (1):
> lockdep: Move hlock_equal() to the respective ifdeffery
>
> Waiman Long (2):
> locking/lockdep: Prevent abuse of lockdep subclass
> locking/lockdep: Add # of dynamic keys stat to /proc/lockdep_stats
>
> kernel/locking/lockdep.c | 76 ++++++++++++++++++++------------------
> kernel/locking/lockdep_internals.h | 1 +
> kernel/locking/lockdep_proc.c | 2 +
> 3 files changed, 44 insertions(+), 35 deletions(-)
Thanks, applied to the locking tree (tip:locking/core).
Ingo
* [tip: locking/core] locking/lockdep: Add number of dynamic keys to /proc/lockdep_stats
From: tip-bot2 for Waiman Long @ 2025-05-06 17:51 UTC
To: linux-tip-commits
Cc: Waiman Long, Boqun Feng, Ingo Molnar, Bill Wendling, Justin Stitt,
Nathan Chancellor, Nick Desaulniers, Peter Zijlstra, Will Deacon,
llvm, x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: cdb7d2d68cde6145a06a56c9d5d5d917297501c6
Gitweb: https://git.kernel.org/tip/cdb7d2d68cde6145a06a56c9d5d5d917297501c6
Author: Waiman Long <longman@redhat.com>
AuthorDate: Mon, 05 May 2025 21:20:49 -07:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 06 May 2025 18:34:43 +02:00
locking/lockdep: Add number of dynamic keys to /proc/lockdep_stats
There have been recent reports about running out of lockdep keys:
MAX_LOCKDEP_KEYS too low!
One possible reason is that too many dynamic keys have been registered.
A possible culprit is the lockdep_register_key() call in qdisc_alloc()
of net/sched/sch_generic.c.
Currently, there is no way to find out how many dynamic keys have been
registered. Add such a stat to /proc/lockdep_stats for better clarity.
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: llvm@lists.linux.dev
Link: https://lore.kernel.org/r/20250506042049.50060-4-boqun.feng@gmail.com
---
kernel/locking/lockdep.c | 3 +++
kernel/locking/lockdep_internals.h | 1 +
kernel/locking/lockdep_proc.c | 2 ++
3 files changed, 6 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 050dbe9..dd2bbf7 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -219,6 +219,7 @@ static DECLARE_BITMAP(list_entries_in_use, MAX_LOCKDEP_ENTRIES);
static struct hlist_head lock_keys_hash[KEYHASH_SIZE];
unsigned long nr_lock_classes;
unsigned long nr_zapped_classes;
+unsigned long nr_dynamic_keys;
unsigned long max_lock_class_idx;
struct lock_class lock_classes[MAX_LOCKDEP_KEYS];
DECLARE_BITMAP(lock_classes_in_use, MAX_LOCKDEP_KEYS);
@@ -1238,6 +1239,7 @@ void lockdep_register_key(struct lock_class_key *key)
goto out_unlock;
}
hlist_add_head_rcu(&key->hash_entry, hash_head);
+ nr_dynamic_keys++;
out_unlock:
graph_unlock();
restore_irqs:
@@ -6609,6 +6611,7 @@ void lockdep_unregister_key(struct lock_class_key *key)
pf = get_pending_free();
__lockdep_free_key_range(pf, key, 1);
need_callback = prepare_call_rcu_zapped(pf);
+ nr_dynamic_keys--;
}
lockdep_unlock();
raw_local_irq_restore(flags);
diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h
index 20f9ef5..82156ca 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -138,6 +138,7 @@ extern unsigned long nr_lock_classes;
extern unsigned long nr_zapped_classes;
extern unsigned long nr_zapped_lock_chains;
extern unsigned long nr_list_entries;
+extern unsigned long nr_dynamic_keys;
long lockdep_next_lockchain(long i);
unsigned long lock_chain_count(void);
extern unsigned long nr_stack_trace_entries;
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 6db0f43..b52c07c 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -286,6 +286,8 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
#endif
seq_printf(m, " lock-classes: %11lu [max: %lu]\n",
nr_lock_classes, MAX_LOCKDEP_KEYS);
+ seq_printf(m, " dynamic-keys: %11lu\n",
+ nr_dynamic_keys);
seq_printf(m, " direct dependencies: %11lu [max: %lu]\n",
nr_list_entries, MAX_LOCKDEP_ENTRIES);
seq_printf(m, " indirect dependencies: %11lu\n",
* [tip: locking/core] locking/lockdep: Prevent abuse of lockdep subclass
From: tip-bot2 for Waiman Long @ 2025-05-06 17:52 UTC
To: linux-tip-commits
Cc: Waiman Long, Boqun Feng, Ingo Molnar, Bill Wendling, Justin Stitt,
Nathan Chancellor, Nick Desaulniers, Peter Zijlstra, Will Deacon,
llvm, x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 6a1a219f535a437eb12a06d8cef2518e58654beb
Gitweb: https://git.kernel.org/tip/6a1a219f535a437eb12a06d8cef2518e58654beb
Author: Waiman Long <longman@redhat.com>
AuthorDate: Mon, 05 May 2025 21:20:48 -07:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 06 May 2025 18:34:35 +02:00
locking/lockdep: Prevent abuse of lockdep subclass
To catch the code trying to use a subclass value >= MAX_LOCKDEP_SUBCLASSES (8),
add a DEBUG_LOCKS_WARN_ON() statement to notify the users that such a
large value is not allowed.
[ boqun: Reword the commit log with a more objective tone ]
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: llvm@lists.linux.dev
Link: https://lore.kernel.org/r/20250506042049.50060-3-boqun.feng@gmail.com
---
kernel/locking/lockdep.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 546e928..050dbe9 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5101,6 +5101,9 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
lockevent_inc(lockdep_nocheck);
}
+ if (DEBUG_LOCKS_WARN_ON(subclass >= MAX_LOCKDEP_SUBCLASSES))
+ return 0;
+
if (subclass < NR_LOCKDEP_CACHING_CLASSES)
class = lock->class_cache[subclass];
/*
* [tip: locking/core] locking/lockdep: Move hlock_equal() to the respective #ifdeffery
From: tip-bot2 for Andy Shevchenko @ 2025-05-06 17:52 UTC
To: linux-tip-commits
Cc: Andy Shevchenko, Boqun Feng, Ingo Molnar, Bill Wendling,
Justin Stitt, Nathan Chancellor, Nick Desaulniers, Peter Zijlstra,
Waiman Long, Will Deacon, llvm, x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 96ca1830e1219eda431702eb9a74225e8fe3ccc0
Gitweb: https://git.kernel.org/tip/96ca1830e1219eda431702eb9a74225e8fe3ccc0
Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
AuthorDate: Mon, 05 May 2025 21:20:47 -07:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 06 May 2025 18:34:31 +02:00
locking/lockdep: Move hlock_equal() to the respective #ifdeffery
When hlock_equal() is unused, it prevents kernel builds with clang,
`make W=1` and CONFIG_WERROR=y, CONFIG_LOCKDEP=y and
CONFIG_LOCKDEP_SMALL=n:
lockdep.c:2005:20: error: unused function 'hlock_equal' [-Werror,-Wunused-function]
Fix this by moving the function under the existing ifdeffery of its only user.
See also:
6863f5643dd7 ("kbuild: allow Clang to find unused static inline functions for W=1 build")
Fixes: 68e305678583 ("lockdep: Adjust check_redundant() for recursive read change")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Bill Wendling <morbo@google.com>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <nick.desaulniers+lkml@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: llvm@lists.linux.dev
Link: https://lore.kernel.org/r/20250506042049.50060-2-boqun.feng@gmail.com
---
kernel/locking/lockdep.c | 70 +++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 35 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 58d78a3..546e928 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1977,41 +1977,6 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth,
}
/*
- * We are about to add A -> B into the dependency graph, and in __bfs() a
- * strong dependency path A -> .. -> B is found: hlock_class equals
- * entry->class.
- *
- * If A -> .. -> B can replace A -> B in any __bfs() search (means the former
- * is _stronger_ than or equal to the latter), we consider A -> B as redundant.
- * For example if A -> .. -> B is -(EN)-> (i.e. A -(E*)-> .. -(*N)-> B), and A
- * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the
- * dependency graph, as any strong path ..-> A -> B ->.. we can get with
- * having dependency A -> B, we could already get a equivalent path ..-> A ->
- * .. -> B -> .. with A -> .. -> B. Therefore A -> B is redundant.
- *
- * We need to make sure both the start and the end of A -> .. -> B is not
- * weaker than A -> B. For the start part, please see the comment in
- * check_redundant(). For the end part, we need:
- *
- * Either
- *
- * a) A -> B is -(*R)-> (everything is not weaker than that)
- *
- * or
- *
- * b) A -> .. -> B is -(*N)-> (nothing is stronger than this)
- *
- */
-static inline bool hlock_equal(struct lock_list *entry, void *data)
-{
- struct held_lock *hlock = (struct held_lock *)data;
-
- return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */
- (hlock->read == 2 || /* A -> B is -(*R)-> */
- !entry->only_xr); /* A -> .. -> B is -(*N)-> */
-}
-
-/*
* We are about to add B -> A into the dependency graph, and in __bfs() a
* strong dependency path A -> .. -> B is found: hlock_class equals
* entry->class.
@@ -2916,6 +2881,41 @@ static inline bool usage_skip(struct lock_list *entry, void *mask)
#ifdef CONFIG_LOCKDEP_SMALL
/*
+ * We are about to add A -> B into the dependency graph, and in __bfs() a
+ * strong dependency path A -> .. -> B is found: hlock_class equals
+ * entry->class.
+ *
+ * If A -> .. -> B can replace A -> B in any __bfs() search (means the former
+ * is _stronger_ than or equal to the latter), we consider A -> B as redundant.
+ * For example if A -> .. -> B is -(EN)-> (i.e. A -(E*)-> .. -(*N)-> B), and A
+ * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the
+ * dependency graph, as any strong path ..-> A -> B ->.. we can get with
+ * having dependency A -> B, we could already get a equivalent path ..-> A ->
+ * .. -> B -> .. with A -> .. -> B. Therefore A -> B is redundant.
+ *
+ * We need to make sure both the start and the end of A -> .. -> B is not
+ * weaker than A -> B. For the start part, please see the comment in
+ * check_redundant(). For the end part, we need:
+ *
+ * Either
+ *
+ * a) A -> B is -(*R)-> (everything is not weaker than that)
+ *
+ * or
+ *
+ * b) A -> .. -> B is -(*N)-> (nothing is stronger than this)
+ *
+ */
+static inline bool hlock_equal(struct lock_list *entry, void *data)
+{
+ struct held_lock *hlock = (struct held_lock *)data;
+
+ return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */
+ (hlock->read == 2 || /* A -> B is -(*R)-> */
+ !entry->only_xr); /* A -> .. -> B is -(*N)-> */
+}
+
+/*
* Check that the dependency graph starting at <src> can lead to
* <target> or not. If it can, <src> -> <target> dependency is already
* in the graph.