From: Yuyang Du <duyuyang@gmail.com>
To: peterz@infradead.org, will.deacon@arm.com, mingo@kernel.org
Cc: bvanassche@acm.org, ming.lei@redhat.com, frederic@kernel.org,
tglx@linutronix.de, linux-kernel@vger.kernel.org,
longman@redhat.com, paulmck@linux.vnet.ibm.com,
boqun.feng@gmail.com, Yuyang Du <duyuyang@gmail.com>
Subject: [PATCH v3 12/30] locking/lockdep: Treat every lock dependency as in a new lock chain
Date: Fri, 28 Jun 2019 17:15:10 +0800
Message-ID: <20190628091528.17059-13-duyuyang@gmail.com>
In-Reply-To: <20190628091528.17059-1-duyuyang@gmail.com>
For a lock chain that has a lock on top of a trylock or of multiple
consecutive trylocks, if they are in the same context, we check each
prev trylock -> next dependency and the first prev non-trylock -> next
dependency, illustrated as follows:
(The top is the latest lock.)
Lock1
Trylock2
Trylock3
Lock4
...
If the prev lock is not the direct previous lock of next (e.g., Trylock3
and Lock4), this dependency may not have a lock chain associated with
it. In other words, we may never create a lock chain for it, yet the
dependency is actually checked in such circumstances. Fix this by
treating each such dependency as if it comes from a new lock chain. If
the chain already exists, this is a chain hit and the check is not
actually needed.

After this, it is guaranteed that each dependency has at least one lock
chain associated with it.
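
To make the scenario concrete, here is a hypothetical acquisition
sequence (not part of this patch; the lock names simply mirror the
diagram above) that produces such a held-lock stack:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(lock4);
    static DEFINE_SPINLOCK(trylock3);
    static DEFINE_SPINLOCK(trylock2);
    static DEFINE_SPINLOCK(lock1);

    static void example(void)
    {
    	spin_lock(&lock4);			/* Lock4: first non-trylock */
    	if (spin_trylock(&trylock3)) {		/* Trylock3 */
    		if (spin_trylock(&trylock2)) {	/* Trylock2 */
    			/*
    			 * Lock1 is the latest (top) lock. When it is
    			 * acquired, the dependencies Trylock2 -> Lock1,
    			 * Trylock3 -> Lock1 and Lock4 -> Lock1 are
    			 * checked; the walk stops at Lock4, the first
    			 * non-trylock entry.
    			 */
    			spin_lock(&lock1);
    			/* ... critical section ... */
    			spin_unlock(&lock1);
    			spin_unlock(&trylock2);
    		}
    		spin_unlock(&trylock3);
    	}
    	spin_unlock(&lock4);
    }

As described above, with this change each of those prev -> next
dependencies is looked up in (and, if missing, added to) the chain
cache, rather than only the chain ending at the direct previous lock.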
Signed-off-by: Yuyang Du <duyuyang@gmail.com>
---
kernel/locking/lockdep.c | 223 +++++++++++++++++++++++------------------------
1 file changed, 108 insertions(+), 115 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 5d19dc6..6f457ef 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1207,6 +1207,11 @@ static bool is_dynamic_key(const struct lock_class_key *key)
}
#ifdef CONFIG_PROVE_LOCKING
+static inline struct
+lock_chain *lookup_chain_cache_add(struct task_struct *curr,
+ struct held_lock *hlock,
+ u64 chain_key, int depth);
+
/*
* Allocate a lockdep entry. (assumes the graph_lock held, returns
* with NULL on failure)
@@ -2432,87 +2437,6 @@ static inline void inc_chains(void)
return 2;
}
-/*
- * Add the dependency to all directly-previous locks that are 'relevant'.
- * The ones that are relevant are (in increasing distance from curr):
- * all consecutive trylock entries and the final non-trylock entry - or
- * the end of this context's lock-chain - whichever comes first.
- */
-static int
-check_prevs_add(struct task_struct *curr, struct held_lock *next,
- struct lock_chain *chain)
-{
- struct lock_trace trace = { .nr_entries = 0 };
- int depth = curr->lockdep_depth;
- struct held_lock *hlock;
-
- /*
- * Debugging checks.
- *
- * Depth must not be zero for a non-head lock:
- */
- if (!depth)
- goto out_bug;
- /*
- * At least two relevant locks must exist for this
- * to be a head:
- */
- if (curr->held_locks[depth].irq_context !=
- curr->held_locks[depth-1].irq_context)
- goto out_bug;
-
- for (;;) {
- int distance = curr->lockdep_depth - depth + 1;
- hlock = curr->held_locks + depth - 1;
-
- /*
- * Only non-recursive-read entries get new dependencies
- * added:
- */
- if (hlock->read != 2 && hlock->check) {
- int ret = check_prev_add(curr, hlock, next, distance,
- &trace, chain);
- if (!ret)
- return 0;
-
- /*
- * Stop after the first non-trylock entry,
- * as non-trylock entries have added their
- * own direct dependencies already, so this
- * lock is connected to them indirectly:
- */
- if (!hlock->trylock)
- break;
- }
-
- depth--;
- /*
- * End of lock-stack?
- */
- if (!depth)
- break;
- /*
- * Stop the search if we cross into another context:
- */
- if (curr->held_locks[depth].irq_context !=
- curr->held_locks[depth-1].irq_context)
- break;
- }
- return 1;
-out_bug:
- if (!debug_locks_off_graph_unlock())
- return 0;
-
- /*
- * Clearly we all shouldn't be here, but since we made it we
- * can reliable say we messed up our state. See the above two
- * gotos for reasons why we could possibly end up here.
- */
- WARN_ON(1);
-
- return 0;
-}
-
struct lock_chain lock_chains[MAX_LOCKDEP_CHAINS];
static DECLARE_BITMAP(lock_chains_in_use, MAX_LOCKDEP_CHAINS);
int nr_chain_hlocks;
@@ -2810,66 +2734,135 @@ static inline struct lock_chain *lookup_chain_cache(u64 chain_key)
return add_chain_cache(curr, hlock, chain_key, depth);
}
-static int validate_chain(struct task_struct *curr, struct held_lock *hlock,
+/*
+ * Check whether last held lock:
+ *
+ * - is irq-safe, if this lock is irq-unsafe
+ * - is softirq-safe, if this lock is hardirq-unsafe
+ *
+ * And check whether the new lock's dependency graph could lead back to the
+ * previous lock:
+ *
+ * - within the current held-lock stack or
+ * - across our accumulated lock dependency graph
+ *
+ * any of these scenarios could lead to a deadlock.
+ */
+static int validate_chain(struct task_struct *curr, struct held_lock *next,
u64 chain_key)
{
struct lock_chain *chain;
+ struct lock_trace trace = { .nr_entries = 0 };
+ struct held_lock *hlock;
+ int depth = curr->lockdep_depth;
+
/*
* Trylock needs to maintain the stack of held locks, but it
- * does not add new dependencies, because trylock can be done
- * in any order.
- *
+ * does not add new dependencies unless it is taken, because
+ * attempting to acquire a trylock does not block.
+ */
+ if (next->trylock || !next->check)
+ return 1;
+
+ /*
+ * Add the dependency to all previous locks that are 'relevant'. The
+ * ones that are relevant are (in increasing distance from next lock
+ * to acquire): all consecutive trylock entries and the final
+ * non-trylock entry - or the end of this context's lock-chain
+ * - whichever comes first.
+ */
+chain_again:
+ hlock = curr->held_locks + depth - 1;
+
+ /*
* We look up the chain_key and do the O(N^2) check and update of
- * the dependencies only if this is a new dependency chain.
- * (If lookup_chain_cache_add() return with 1 it acquires
- * graph_lock for us)
+ * the dependencies only if this is a new dependency chain. (If
+ * lookup_chain_cache_add() returns 1 it acquires graph_lock for
+ * us.)
*/
- if (!hlock->trylock && hlock->check &&
- (chain = lookup_chain_cache_add(curr, hlock, chain_key,
- curr->lockdep_depth))) {
- /*
- * Check whether last held lock:
- *
- * - is irq-safe, if this lock is irq-unsafe
- * - is softirq-safe, if this lock is hardirq-unsafe
- *
- * And check whether the new lock's dependency graph
- * could lead back to the previous lock:
- *
- * - within the current held-lock stack
- * - across our accumulated lock dependency records
- *
- * any of these scenarios could lead to a deadlock.
- */
+ chain = lookup_chain_cache_add(curr, next, chain_key, depth);
+ if (depth == curr->lockdep_depth) {
+ int ret;
+
+ if (!chain)
+ return 1;
/*
* The simple case: does the current hold the same lock
* already?
*/
- int ret = check_deadlock_current(curr, hlock);
+ ret = check_deadlock_current(curr, next);
if (!ret)
return 0;
/*
- * Add dependency only if this lock is not the head
- * of the chain, and if it's not a secondary read-lock:
+ * Add dependency only if this lock is not the head of the
+ * chain, and if it's not a second recursive-read lock. If
+ * not, there is no need to check further.
*/
- if (chain->depth > 1 && ret != 2) {
- if (!check_prevs_add(curr, hlock, chain))
+ if (!(chain->depth > 1 && ret != 2))
+ goto out_unlock;
+ }
+
+ /*
+ * Only non-recursive-read entries get new dependencies
+ * added:
+ */
+ if (chain) {
+ if (hlock->read != 2 && hlock->check) {
+ int distance = curr->lockdep_depth - depth + 1;
+
+ if (!check_prev_add(curr, hlock, next, distance,
+ &trace, chain))
return 0;
}
graph_unlock();
- } else {
- /* after lookup_chain_cache_add(): */
- if (unlikely(!debug_locks))
- return 0;
}
+ /*
+ * Stop after the first non-trylock entry, as non-trylock entries
+ * have added their own direct dependencies already, so this lock is
+ * connected to them indirectly:
+ */
+ if (!hlock->trylock)
+ goto out;
+
+ depth--;
+ /*
+ * End of lock-stack?
+ */
+ if (!depth)
+ goto out;
+ /*
+ * Stop the search if we cross into another context:
+ */
+ if (curr->held_locks[depth].irq_context !=
+ curr->held_locks[depth-1].irq_context)
+ goto out;
+
+ /*
+ * This is another direct dependency with a further previous lock
+ * that is separated by a trylock. We compose a lock chain out of
+ * this, then calculate the chain key, and look it up in the
+ * lock_chains. If it exists the check is actually not needed.
+ */
+ chain_key = iterate_chain_key(hlock->prev_chain_key,
+ hlock_class(next) - lock_classes);
+
+ goto chain_again;
+
+out_unlock:
+ graph_unlock();
+out:
+ /* after lookup_chain_cache_add(): */
+ if (unlikely(!debug_locks))
+ return 0;
+
return 1;
}
#else
static inline int validate_chain(struct task_struct *curr,
- struct held_lock *hlock,
+ struct held_lock *next,
u64 chain_key)
{
return 1;
--
1.8.3.1
Thread overview: 36+ messages
2019-06-28 9:14 [PATCH v3 00/30] Support recursive-read lock deadlock detection Yuyang Du
2019-06-28 9:14 ` [PATCH v3 01/30] locking/lockdep: Rename deadlock check functions Yuyang Du
2019-06-28 9:15 ` [PATCH v3 02/30] locking/lockdep: Change return type of add_chain_cache() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 03/30] locking/lockdep: Change return type of lookup_chain_cache_add() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 04/30] locking/lockdep: Pass lock chain from validate_chain() to check_prev_add() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 05/30] locking/lockdep: Add lock chain list_head field in struct lock_list and lock_chain Yuyang Du
2019-06-28 9:15 ` [PATCH v3 06/30] locking/lockdep: Update comments in struct lock_list and held_lock Yuyang Du
2019-06-28 9:15 ` [PATCH v3 07/30] locking/lockdep: Remove indirect dependency redundancy check Yuyang Du
2019-06-28 9:15 ` [PATCH v3 08/30] locking/lockdep: Skip checks if direct dependency is already present Yuyang Du
2019-06-28 9:15 ` [PATCH v3 09/30] locking/lockdep: Remove chain_head argument in validate_chain() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 10/30] locking/lockdep: Remove useless lock type assignment Yuyang Du
2019-06-28 9:15 ` [PATCH v3 11/30] locking/lockdep: Specify the depth of current lock stack in lookup_chain_cache_add() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 12/30] locking/lockdep: Treat every lock dependency as in a new lock chain Yuyang Du [this message]
2019-06-28 9:15 ` [PATCH v3 13/30] locking/lockdep: Combine lock_lists in struct lock_class into an array Yuyang Du
2019-06-28 9:15 ` [PATCH v3 14/30] locking/lockdep: Consolidate forward and backward lock_lists into one Yuyang Du
2019-06-28 9:15 ` [PATCH v3 15/30] locking/lockdep: Add lock chains to direct lock dependency graph Yuyang Du
2019-06-28 9:15 ` [PATCH v3 16/30] locking/lockdep: Use lock type enum to explicitly specify read or write locks Yuyang Du
2019-06-28 9:15 ` [PATCH v3 17/30] locking/lockdep: Add read-write type for a lock dependency Yuyang Du
2019-07-10 5:18 ` Boqun Feng
2019-07-11 5:02 ` Yuyang Du
2019-06-28 9:15 ` [PATCH v3 18/30] locking/lockdep: Add helper functions to operate on the searched path Yuyang Du
2019-06-28 9:15 ` [PATCH v3 19/30] locking/lockdep: Update direct dependency's read-write type if it exists Yuyang Du
2019-06-28 9:15 ` [PATCH v3 20/30] locking/lockdep: Introduce chain_hlocks_type for held lock's read-write type Yuyang Du
2019-06-28 9:15 ` [PATCH v3 21/30] locking/lockdep: Hash held lock's read-write type into chain key Yuyang Du
2019-06-28 9:15 ` [PATCH v3 22/30] locking/lockdep: Adjust BFS algorithm to support multiple matches Yuyang Du
2019-06-28 9:15 ` [PATCH v3 23/30] locking/lockdep: Define the two task model for lockdep checks formally Yuyang Du
2019-06-28 9:15 ` [PATCH v3 24/30] locking/lockdep: Introduce mark_lock_unaccessed() Yuyang Du
2019-06-28 9:15 ` [PATCH v3 25/30] locking/lockdep: Add nest lock type Yuyang Du
2019-06-28 9:15 ` [PATCH v3 26/30] locking/lockdep: Add lock exclusiveness table Yuyang Du
2019-06-28 9:15 ` [PATCH v3 27/30] locking/lockdep: Support read-write lock's deadlock detection Yuyang Du
2019-06-28 9:15 ` [PATCH v3 28/30] locking/lockdep: Adjust selftest case for recursive read lock Yuyang Du
2019-06-28 9:15 ` [PATCH v3 29/30] locking/lockdep: Add more lockdep selftest cases Yuyang Du
2019-06-28 9:15 ` [PATCH v3 30/30] locking/lockdep: Remove irq-safe to irq-unsafe read check Yuyang Du
2019-07-10 5:30 ` Boqun Feng
2019-07-10 6:30 ` Yuyang Du
2019-07-10 1:54 ` [PATCH v3 00/30] Support recursive-read lock deadlock detection Yuyang Du