* [BUG] e1000: possible deadlock scenario caught by lockdep
From: Steven Rostedt @ 2011-11-18 1:27 UTC
To: Jesse Brandeburg
Cc: LKML, Thomas Gleixner, Tushar Dave, Aaron Brown, Jeff Kirsher
I hit the following lockdep splat:
======================================================
[ INFO: possible circular locking dependency detected ]
3.2.0-rc2-test+ #14
-------------------------------------------------------
reboot/2316 is trying to acquire lock:
((&(&adapter->watchdog_task)->work)){+.+...}, at: [<ffffffff81069553>] wait_on_work+0x0/0xac
but task is already holding lock:
(&adapter->mutex){+.+...}, at: [<ffffffff81359b1d>] __e1000_shutdown+0x56/0x1f5
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&adapter->mutex){+.+...}:
[<ffffffff8108261a>] lock_acquire+0x103/0x158
[<ffffffff8150bcf3>] __mutex_lock_common+0x6a/0x441
[<ffffffff8150c13d>] mutex_lock_nested+0x1b/0x1d
[<ffffffff81359288>] e1000_watchdog+0x56/0x4a4
[<ffffffff8106a1b0>] process_one_work+0x1ef/0x3e0
[<ffffffff8106b4e0>] worker_thread+0xda/0x15e
[<ffffffff8106f00e>] kthread+0x9f/0xa7
[<ffffffff81514e84>] kernel_thread_helper+0x4/0x10
-> #0 ((&(&adapter->watchdog_task)->work)){+.+...}:
[<ffffffff81081e4a>] __lock_acquire+0xa29/0xd06
[<ffffffff8108261a>] lock_acquire+0x103/0x158
[<ffffffff81069590>] wait_on_work+0x3d/0xac
[<ffffffff8106a616>] __cancel_work_timer+0xb9/0xff
[<ffffffff8106a66e>] cancel_delayed_work_sync+0x12/0x14
[<ffffffff81355c8f>] e1000_down_and_stop+0x2e/0x4a
[<ffffffff813581ed>] e1000_down+0x116/0x176
[<ffffffff81359b4a>] __e1000_shutdown+0x83/0x1f5
[<ffffffff81359cd6>] e1000_shutdown+0x1a/0x43
[<ffffffff8126fdad>] pci_device_shutdown+0x29/0x3d
[<ffffffff8130c601>] device_shutdown+0xbe/0xf9
[<ffffffff81065b17>] kernel_restart_prepare+0x31/0x38
[<ffffffff81065b32>] kernel_restart+0x14/0x51
[<ffffffff81065cd8>] sys_reboot+0x157/0x1b0
[<ffffffff81513882>] system_call_fastpath+0x16/0x1b
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&adapter->mutex);
                               lock((&(&adapter->watchdog_task)->work));
                               lock(&adapter->mutex);
  lock((&(&adapter->watchdog_task)->work));
*** DEADLOCK ***
2 locks held by reboot/2316:
#0: (reboot_mutex){+.+.+.}, at: [<ffffffff81065c20>] sys_reboot+0x9f/0x1b0
#1: (&adapter->mutex){+.+...}, at: [<ffffffff81359b1d>] __e1000_shutdown+0x56/0x1f5
stack backtrace:
Pid: 2316, comm: reboot Not tainted 3.2.0-rc2-test+ #14
Call Trace:
[<ffffffff81503eb2>] print_circular_bug+0x1f8/0x209
[<ffffffff81081e4a>] __lock_acquire+0xa29/0xd06
[<ffffffff81069553>] ? wait_on_cpu_work+0x94/0x94
[<ffffffff8108261a>] lock_acquire+0x103/0x158
[<ffffffff81069553>] ? wait_on_cpu_work+0x94/0x94
[<ffffffff810c7caf>] ? trace_preempt_on+0x2a/0x2f
[<ffffffff81069590>] wait_on_work+0x3d/0xac
[<ffffffff81069553>] ? wait_on_cpu_work+0x94/0x94
[<ffffffff8106a616>] __cancel_work_timer+0xb9/0xff
[<ffffffff8106a66e>] cancel_delayed_work_sync+0x12/0x14
[<ffffffff81355c8f>] e1000_down_and_stop+0x2e/0x4a
[<ffffffff813581ed>] e1000_down+0x116/0x176
[<ffffffff81359b4a>] __e1000_shutdown+0x83/0x1f5
[<ffffffff8150d51c>] ? _raw_spin_unlock+0x33/0x56
[<ffffffff8130c583>] ? device_shutdown+0x40/0xf9
[<ffffffff81359cd6>] e1000_shutdown+0x1a/0x43
[<ffffffff81510757>] ? sub_preempt_count+0xa1/0xb4
[<ffffffff8126fdad>] pci_device_shutdown+0x29/0x3d
[<ffffffff8130c601>] device_shutdown+0xbe/0xf9
[<ffffffff81065b17>] kernel_restart_prepare+0x31/0x38
[<ffffffff81065b32>] kernel_restart+0x14/0x51
[<ffffffff81065cd8>] sys_reboot+0x157/0x1b0
[<ffffffff81072ccb>] ? hrtimer_cancel+0x17/0x24
[<ffffffff8150c304>] ? do_nanosleep+0x74/0xac
[<ffffffff8125c72d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[<ffffffff8150e066>] ? error_sti+0x5/0x6
[<ffffffff810c7c80>] ? time_hardirqs_off+0x2a/0x2f
[<ffffffff8125c6ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8150db5d>] ? retint_swapgs+0x13/0x1b
[<ffffffff8150db5d>] ? retint_swapgs+0x13/0x1b
[<ffffffff81082a78>] ? trace_hardirqs_on_caller+0x12d/0x164
[<ffffffff810a74ce>] ? audit_syscall_entry+0x11c/0x148
[<ffffffff8125c6ee>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff81513882>] system_call_fastpath+0x16/0x1b
The issue comes from two recent commits:
commit a4010afef585b7142eb605e3a6e4210c0e1b2957
Author: Jesse Brandeburg <jesse.brandeburg@intel.com>
Date: Wed Oct 5 07:24:41 2011 +0000
e1000: convert hardware management from timers to threads
and
commit 0ef4eedc2e98edd51cd106e1f6a27178622b7e57
Author: Jesse Brandeburg <jesse.brandeburg@intel.com>
Date: Wed Oct 5 07:24:51 2011 +0000
e1000: convert to private mutex from rtnl
What we have in __e1000_shutdown() is:
	mutex_lock(&adapter->mutex);

	if (netif_running(netdev)) {
		WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
		e1000_down(adapter);
	}
but e1000_down() calls e1000_down_and_stop():
static void e1000_down_and_stop(struct e1000_adapter *adapter)
{
	set_bit(__E1000_DOWN, &adapter->flags);
	cancel_work_sync(&adapter->reset_task);
	cancel_delayed_work_sync(&adapter->watchdog_task);
	cancel_delayed_work_sync(&adapter->phy_info_task);
	cancel_delayed_work_sync(&adapter->fifo_stall_task);
}
Here you can see that we call cancel_delayed_work_sync(&adapter->watchdog_task).

The problem is that the watchdog work grabs &adapter->mutex.

If the work has already started and is blocked on that mutex,
cancel_delayed_work_sync() will wait for it forever and we have a
deadlock.

I'm not sure what the best way around this is. Can we call e1000_down()
without grabbing adapter->mutex?
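
In case it helps to see the shape of the problem outside the driver, below
is a minimal userspace analogue (plain pthreads; the names are only
illustrative and it is obviously not driver code). The join plays the role
of cancel_delayed_work_sync() waiting for the work to finish:

/* Toy pthread analogue of the deadlock -- illustration only, not e1000 code.
 * Build with: gcc -pthread demo.c ; it deadlocks by design. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t adapter_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stands in for the queued e1000_watchdog() work */
static void *watchdog(void *arg)
{
	pthread_mutex_lock(&adapter_mutex);	/* like mutex_lock(&adapter->mutex) in the work */
	puts("watchdog ran");
	pthread_mutex_unlock(&adapter_mutex);
	return arg;
}

int main(void)
{
	pthread_t tid;

	pthread_mutex_lock(&adapter_mutex);	/* like __e1000_shutdown() taking adapter->mutex */
	pthread_create(&tid, NULL, watchdog, NULL);
	sleep(1);				/* the "work" is now blocked on the mutex */
	pthread_join(tid, NULL);		/* like cancel_delayed_work_sync(): waits forever */
	pthread_mutex_unlock(&adapter_mutex);	/* never reached */

	return 0;
}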
-- Steve
* Re: [BUG] e1000: possible deadlock scenario caught by lockdep
From: Jesse Brandeburg @ 2011-11-18 16:57 UTC
To: Steven Rostedt
Cc: LKML, Thomas Gleixner, Dave, Tushar N, Brown, Aaron F,
Kirsher, Jeffrey T, netdev, e1000-devel
CC'd netdev, and e1000-devel
On Thu, 17 Nov 2011 17:27:00 -0800
Steven Rostedt <rostedt@goodmis.org> wrote:
> I hit the following lockdep splat:
>
> [ full lockdep splat and commit references trimmed -- quoted in the
>   original message above ]
>
> What we have in __e1000_shutdown() is:
>
> 	mutex_lock(&adapter->mutex);
>
> 	if (netif_running(netdev)) {
> 		WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
> 		e1000_down(adapter);
> 	}
>
> but e1000_down() calls e1000_down_and_stop():
>
> static void e1000_down_and_stop(struct e1000_adapter *adapter)
> {
> 	set_bit(__E1000_DOWN, &adapter->flags);
> 	cancel_work_sync(&adapter->reset_task);
> 	cancel_delayed_work_sync(&adapter->watchdog_task);
> 	cancel_delayed_work_sync(&adapter->phy_info_task);
> 	cancel_delayed_work_sync(&adapter->fifo_stall_task);
> }
>
>
> Here you can see that we call cancel_delayed_work_sync(&adapter->watchdog_task).
>
> The problem is that the watchdog work grabs &adapter->mutex.
>
> If the work has already started and is blocked on that mutex,
> cancel_delayed_work_sync() will wait for it forever and we have a
> deadlock.
>
> I'm not sure what the best way around this is. Can we call e1000_down()
> without grabbing adapter->mutex?
Thanks for the report. I'll look at it today and see if I can work out
a way to avoid the bonk.
* Re: [BUG] e1000: possible deadlock scenario caught by lockdep
From: Jesse Brandeburg @ 2011-11-18 23:17 UTC
To: Jesse Brandeburg
Cc: Steven Rostedt, LKML, Thomas Gleixner, Dave, Tushar N,
Brown, Aaron F, Kirsher, Jeffrey T, netdev@vger.kernel.org,
e1000-devel@lists.sourceforge.net
On Fri, 18 Nov 2011 08:57:37 -0800
Jesse Brandeburg <jesse.brandeburg@intel.com> wrote:
> CC'd netdev, and e1000-devel
>
> On Thu, 17 Nov 2011 17:27:00 -0800
> Steven Rostedt <rostedt@goodmis.org> wrote:
> > Here you can see that we call cancel_delayed_work_sync(&adapter->watchdog_task).
> >
> > The problem is that the watchdog work grabs &adapter->mutex.
> >
> > If the work has already started and is blocked on that mutex,
> > cancel_delayed_work_sync() will wait for it forever and we have a
> > deadlock.
> >
> > I'm not sure what the best way around this is. Can we call e1000_down()
> > without grabbing adapter->mutex?
>
> Thanks for the report, I'll look at it today and see if I can work out
> a way to avoid the bonk.
This is a proposed patch to fix the issue. If it works for you, please
let me know and I will submit it officially through our process.
e1000: fix lockdep splat in shutdown handler
From: Jesse Brandeburg <jesse.brandeburg@intel.com>
As reported by Steven Rostedt, e1000 gained a lockdep splat during
the recent merge window. The issue is that cancel_delayed_work_sync()
is called while holding our private mutex.

There is no reason that I can see to hold the mutex during PCI
shutdown; it was more just paranoia that made me put the mutex_lock
around the call to e1000_down.

In a quick survey, lots of drivers handle locking differently when
called by the PCI layer. The assumption here is that we don't need
the mutex's protection in this function because the driver cannot be
unloaded while in the shutdown handler, which is only called at
reboot or poweroff.
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
---
drivers/net/ethernet/intel/e1000/e1000_main.c | 8 +-------
1 files changed, 1 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index cf480b5..97b46ba 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -4716,8 +4716,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 	netif_device_detach(netdev);
 
-	mutex_lock(&adapter->mutex);
-
 	if (netif_running(netdev)) {
 		WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
 		e1000_down(adapter);
@@ -4725,10 +4723,8 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
 
 #ifdef CONFIG_PM
 	retval = pci_save_state(pdev);
-	if (retval) {
-		mutex_unlock(&adapter->mutex);
+	if (retval)
 		return retval;
-	}
 #endif
 
 	status = er32(STATUS);
@@ -4783,8 +4779,6 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
 	if (netif_running(netdev))
 		e1000_free_irq(adapter);
 
-	mutex_unlock(&adapter->mutex);
-
 	pci_disable_device(pdev);
 
 	return 0;
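
For review convenience, with the patch applied the regions of
__e1000_shutdown() touched above end up looking roughly as follows (a
sketch reconstructed from the diff context only; everything between the
hunks is elided):

/* Rough post-patch shape of __e1000_shutdown(); only the regions touched
 * by the diff above are shown, the rest of the function is elided. */
static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
	/* ... code before the first hunk elided ... */

	netif_device_detach(netdev);

	if (netif_running(netdev)) {
		WARN_ON(test_bit(__E1000_RESETTING, &adapter->flags));
		e1000_down(adapter);	/* now cancels the watchdog work without adapter->mutex held */
	}

#ifdef CONFIG_PM
	retval = pci_save_state(pdev);
	if (retval)
		return retval;
#endif

	/* ... code between the hunks elided ... */

	if (netif_running(netdev))
		e1000_free_irq(adapter);

	pci_disable_device(pdev);

	return 0;
}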
* Re: [BUG] e1000: possible deadlock scenario caught by lockdep
From: Steven Rostedt @ 2011-11-19 1:47 UTC
To: Jesse Brandeburg
Cc: LKML, Thomas Gleixner, Dave, Tushar N, Brown, Aaron F,
Kirsher, Jeffrey T, netdev@vger.kernel.org,
e1000-devel@lists.sourceforge.net
On Fri, 2011-11-18 at 15:17 -0800, Jesse Brandeburg wrote:
> This is a proposed patch to fix the issue. If it works for you, please
> let me know and I will submit it officially through our process.
Well, the one time I tested it, it didn't crash and it didn't give a
lockdep splat either. I'll add it and reboot it a few more times and
I'll holler if something bad happens.
Tested-by: Steven Rostedt <rostedt@goodmis.org>
-- Steve