From: Yong Zhang <yong.zhang0@gmail.com>
To: Stephen Boyd <sboyd@codeaurora.org>
Cc: linux-kernel@vger.kernel.org, Tejun Heo <tj@kernel.org>,
netdev@vger.kernel.org, Ben Dooks <ben-linux@fluff.org>
Subject: Re: [PATCH 1/2] workqueue: Catch more locking problems with flush_work()
Date: Fri, 20 Apr 2012 14:01:01 +0800
Message-ID: <20120420060101.GA16563@zhy>
In-Reply-To: <20120420052633.GA16219@zhy>
On Fri, Apr 20, 2012 at 01:26:33PM +0800, Yong Zhang wrote:
> On Thu, Apr 19, 2012 at 11:36:32AM -0700, Stephen Boyd wrote:
> > Does looking at the second patch help? Basically schedule_work() can run
> > the callback right between the time the mutex is acquired and
> > flush_work() is called:
> >
> > CPU0                            CPU1
> >
> > <irq>
> > schedule_work()                 mutex_lock(&mutex)
> > <irq return>
> > my_work()                       flush_work()
> >  mutex_lock(&mutex)
> >  <deadlock>
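
[ For reference, a minimal sketch of that pattern in driver code; the
  names below are hypothetical, for illustration only:

	static DEFINE_MUTEX(my_mutex);
	static struct work_struct my_work;	/* INIT_WORK(&my_work, my_work_fn) */

	static void my_work_fn(struct work_struct *work)
	{
		mutex_lock(&my_mutex);	/* blocks if my_stop() got here first */
		/* ... */
		mutex_unlock(&my_mutex);
	}

	static void my_stop(void)
	{
		mutex_lock(&my_mutex);
		flush_work(&my_work);	/* waits forever if my_work_fn() is
					 * already blocked on my_mutex */
		mutex_unlock(&my_mutex);
	}
  ]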
>
> I get your point. It is a problem. But your patch could introduce
> false positives, since by the time flush_work() is called that very
> work may have already finished running.
>
> So I think we need the lock_map_acquire()/lock_map_release() only
> while the work is actually being processed, no?
But start_flush_work() already tries to take care of this issue; it
just doesn't add work->lockdep_map into the chain.

So does the patch below help?
Thanks,
Yong
---
From: Yong Zhang <yong.zhang@windriver.com>
Date: Fri, 20 Apr 2012 13:44:16 +0800
Subject: [PATCH] workqueue: lockdep: make flush_work() notice deadlock
Connect the lock chain by acquiring work->lockdep_map when the
to-be-flushed work is running.
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Reported-by: Stephen Boyd <sboyd@codeaurora.org>
---
kernel/workqueue.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc867e8..c096b05 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2461,6 +2461,8 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		lock_map_acquire(&cwq->wq->lockdep_map);
 	else
 		lock_map_acquire_read(&cwq->wq->lockdep_map);
+	lock_map_acquire(&work->lockdep_map);
+	lock_map_release(&work->lockdep_map);
 	lock_map_release(&cwq->wq->lockdep_map);

 	return true;
--
1.7.5.4
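
[ Note: with the two added lines, flush_work() called under a lock
  records the dependency lock -> work->lockdep_map, while
  process_one_work() already records work->lockdep_map -> lock when
  the handler takes that same lock, so lockdep can report the
  inversion shown in the scenario above. The usual driver-side fix is
  to drop the lock before flushing; a hypothetical sketch reusing the
  names from the earlier example:

	static void my_stop(void)
	{
		mutex_lock(&my_mutex);
		/* ... teardown that needs the lock ... */
		mutex_unlock(&my_mutex);

		/* flush (or cancel_work_sync()) with the lock dropped */
		flush_work(&my_work);
	}
  ]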