From: Oleg Nesterov <oleg@tv-sign.ru>
To: Ingo Molnar <mingo@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>,
Arjan van de Ven <arjan@linux.intel.com>,
Linux Kernel list <linux-kernel@vger.kernel.org>,
linux-wireless <linux-wireless@vger.kernel.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
mingo@elte.hu, Thomas Sattler <tsattler@gmx.de>
Subject: Re: [RFC/PATCH] debug workqueue deadlocks with lockdep
Date: Sat, 30 Jun 2007 15:46:58 +0400
Message-ID: <20070630114658.GA344@tv-sign.ru>
In-Reply-To: <1183190728.7932.43.camel@earth4>

On 06/30, Ingo Molnar wrote:
>
> On Thu, 2007-06-28 at 19:33 +0200, Johannes Berg wrote:
> > No, that's not right either, but Arjan just helped me a bit with how
> > lockdep works and I think I have the right idea now. Ignore this for
> > now, I'll send a new patch in a few days.
>
> ok. But in general, this is a very nice idea!
>
> i've Cc:-ed Oleg. Oleg, what do you think? I think we should keep all
> the workqueue APIs specified in a form that makes them lockdep coverable
> like Johannes did. This debug mechanism could have helped with the
> recent DVB lockup that Thomas Sattler reported.

I think this idea is great!

Johannes, could you change wait_on_work() as well? Most users of
flush_workqueue() should be converted to use wait_on_work() instead.
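
Something like this at the top of wait_on_work(), perhaps. Just an
untested sketch: it assumes struct work_struct grows its own lockdep_map,
initialized by INIT_WORK(), the same way your patch adds one to struct
workqueue_struct.

	static void wait_on_work(struct cpu_workqueue_struct *cwq,
				 struct work_struct *work)
	{
		/*
		 * Sketch only. Tell lockdep that the caller may block
		 * until work->func() completes, so any lock held here
		 * and also taken by work->func() is a potential deadlock.
		 */
		lock_acquire(&work->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
		lock_release(&work->lockdep_map, 0, _THIS_IP_);

		/* ... rest of wait_on_work() unchanged ... */
	}
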
> @@ -342,6 +351,9 @@ static int flush_cpu_workqueue(struct cp
>  	} else {
>  		struct wq_barrier barr;
>
> +		lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
> +		lock_release(&cwq->wq->lockdep_map, 0, _THIS_IP_);
> +
>  		active = 0;
>  		spin_lock_irq(&cwq->lock);

I am not sure why you skip the "if (cwq->thread == current)" case; it can
deadlock in the same way.
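
For example (hypothetical code, made-up names): a work function that
holds a mutex while flushing its own workqueue makes flush_cpu_workqueue()
run the other pending works by hand, and deadlocks if one of them takes
the same mutex:

	static struct workqueue_struct *my_wq;
	static DEFINE_MUTEX(my_mutex);

	static void work_b_fn(struct work_struct *work)
	{
		mutex_lock(&my_mutex);	/* never succeeds, work_a holds it */
		mutex_unlock(&my_mutex);
	}

	static void work_a_fn(struct work_struct *work)
	{
		mutex_lock(&my_mutex);
		/*
		 * cwq->thread == current, so the flush runs the pending
		 * work_b_fn() by hand and blocks on my_mutex above;
		 * with the annotation skipped, lockdep never sees it.
		 */
		flush_workqueue(my_wq);
		mutex_unlock(&my_mutex);
	}
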
But perhaps we should not change flush_cpu_workqueue() at all. If we detect
the deadlock there, we will get num_online_cpus() copies of the same
report, yes?
And,
>  		if (!list_empty(&cwq->worklist) || cwq->current_work != NULL) {
> @@ -376,6 +388,8 @@ void fastcall flush_workqueue(struct wor
>  	int cpu;
>
>  	might_sleep();
> +	lock_acquire(&wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
> +	lock_release(&wq->lockdep_map, 0, _THIS_IP_);

One of the two callers of flush_cpu_workqueue() was already modified above.
Perhaps it is better to add the lock_acquire() into the second caller,
cleanup_workqueue_thread(), and skip flush_cpu_workqueue()? Roughly like
the sketch below.
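
I.e., something like this (untested sketch; the exact shape of
cleanup_workqueue_thread() may differ):

	static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq,
					     int cpu)
	{
		if (cwq->thread == NULL)
			return;

		/*
		 * Sketch: take the map once here rather than inside
		 * flush_cpu_workqueue(), so a deadlock produces one
		 * report instead of num_online_cpus() copies.
		 */
		lock_acquire(&cwq->wq->lockdep_map, 0, 0, 0, 2, _THIS_IP_);
		lock_release(&cwq->wq->lockdep_map, 0, _THIS_IP_);

		flush_cpu_workqueue(cwq);
		kthread_stop(cwq->thread);
		cwq->thread = NULL;
	}
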
Oleg.