netdev.vger.kernel.org archive mirror
* qlge warning
@ 2010-11-30 23:28 Yinghai Lu
  2010-12-03 20:56 ` Jarek Poplawski
  0 siblings, 1 reply; 5+ messages in thread
From: Yinghai Lu @ 2010-11-30 23:28 UTC (permalink / raw)
  To: David Miller, NetDev; +Cc: Ingo Molnar

[  290.233264] =======================================================
[  290.251780] [ INFO: possible circular locking dependency detected ]
[  290.271534] 2.6.37-rc4-tip-yh-05919-geb30094-dirty #308
[  290.271775] -------------------------------------------------------
[  290.291512] swapper/1 is trying to acquire lock:
[  290.291725]  ((&(&qdev->mpi_port_cfg_work)->work)){+.+...}, at:
[<ffffffff81096419>] wait_on_work+0x0/0xff
[  290.311643]
[  290.311644] but task is already holding lock:
[  290.311915]  (rtnl_mutex){+.+.+.}, at: [<ffffffff81bb094d>]
rtnl_lock+0x17/0x19
[  290.331681]
[  290.331682] which lock already depends on the new lock.
[  290.331684]
[  290.351491]
[  290.351492] the existing dependency chain (in reverse order) is:
[  290.351830]
[  290.351831] -> #1 (rtnl_mutex){+.+.+.}:
[  290.371562]        [<ffffffff810ae6b6>] lock_acquire+0xca/0xf0
[  290.371824]        [<ffffffff81cdbf5d>] mutex_lock_nested+0x60/0x2b8
[  290.391539]        [<ffffffff81bb094d>] rtnl_lock+0x17/0x19
[  290.411250]        [<ffffffff818501ad>] ql_mpi_port_cfg_work+0x1f/0x1ad
[  290.411606]        [<ffffffff81095189>] process_one_work+0x234/0x3e8
[  290.431282]        [<ffffffff81095663>] worker_thread+0x17f/0x261
[  290.431583]        [<ffffffff8109a633>] kthread+0xa0/0xa8
[  290.451279]        [<ffffffff8103a914>] kernel_thread_helper+0x4/0x10
[  290.451581]
[  290.451582] -> #0 ((&(&qdev->mpi_port_cfg_work)->work)){+.+...}:
[  290.471483]        [<ffffffff810ada85>] __lock_acquire+0x113c/0x1813
[  290.491177]        [<ffffffff810ae6b6>] lock_acquire+0xca/0xf0
[  290.491451]        [<ffffffff8109646c>] wait_on_work+0x53/0xff
[  290.511128]        [<ffffffff810965da>] __cancel_work_timer+0xc2/0x102
[  290.511434]        [<ffffffff8109662c>] cancel_delayed_work_sync+0x12/0x14
[  290.531233]        [<ffffffff81847646>] ql_cancel_all_work_sync+0x64/0x68
[  290.531563]        [<ffffffff818499d5>] ql_adapter_down+0x23/0xf6
[  290.551298]        [<ffffffff81849ca7>] qlge_close+0x67/0x76
[  290.571015]        [<ffffffff81ba3853>] __dev_close+0x7b/0x89
[  290.571297]        [<ffffffff81ba5535>] __dev_change_flags+0xad/0x131
[  290.590974]        [<ffffffff81ba563a>] dev_change_flags+0x21/0x57
[  290.591280]        [<ffffffff827de30e>] ic_close_devs+0x2e/0x48
[  290.610978]        [<ffffffff827df332>] ip_auto_config+0xbc9/0xe84
[  290.611280]        [<ffffffff810002da>] do_one_initcall+0x57/0x135
[  290.630977]        [<ffffffff8278ef8a>] kernel_init+0x16c/0x1f6
[  290.631263]        [<ffffffff8103a914>] kernel_thread_helper+0x4/0x10
[  290.651000]
[  290.651001] other info that might help us debug this:
[  290.651003]
[  290.670829] 1 lock held by swapper/1:
[  290.671013]  #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff81bb094d>]
rtnl_lock+0x17/0x19
[  290.690819]
[  290.690820] stack backtrace:
[  290.691054] Pid: 1, comm: swapper Not tainted
2.6.37-rc4-tip-yh-05919-geb30094-dirty #308
[  290.710805] Call Trace:
[  290.710938]  [<ffffffff810aa296>] ? print_circular_bug+0xaf/0xbe
[  290.730683]  [<ffffffff810ada85>] ? __lock_acquire+0x113c/0x1813
[  290.730955]  [<ffffffff81095d70>] ? wait_on_cpu_work+0xdb/0x114
[  290.750672]  [<ffffffff81096419>] ? wait_on_work+0x0/0xff
[  290.750939]  [<ffffffff810ae6b6>] ? lock_acquire+0xca/0xf0
[  290.770664]  [<ffffffff81096419>] ? wait_on_work+0x0/0xff
[  290.770920]  [<ffffffff8109646c>] ? wait_on_work+0x53/0xff
[  290.790575]  [<ffffffff81096419>] ? wait_on_work+0x0/0xff
[  290.790821]  [<ffffffff810965da>] ? __cancel_work_timer+0xc2/0x102
[  290.810559]  [<ffffffff8109662c>] ? cancel_delayed_work_sync+0x12/0x14
[  290.810855]  [<ffffffff81847646>] ? ql_cancel_all_work_sync+0x64/0x68
[  290.830594]  [<ffffffff818499d5>] ? ql_adapter_down+0x23/0xf6
[  290.830867]  [<ffffffff81849ca7>] ? qlge_close+0x67/0x76
[  290.850568]  [<ffffffff81ba3853>] ? __dev_close+0x7b/0x89
[  290.850829]  [<ffffffff81ba5535>] ? __dev_change_flags+0xad/0x131
[  290.870540]  [<ffffffff81ba563a>] ? dev_change_flags+0x21/0x57
[  290.870815]  [<ffffffff827de30e>] ? ic_close_devs+0x2e/0x48
[  290.890595]  [<ffffffff827df332>] ? ip_auto_config+0xbc9/0xe84
[  290.910247]  [<ffffffff81cda1e3>] ? printk+0x41/0x43
[  290.910488]  [<ffffffff827de769>] ? ip_auto_config+0x0/0xe84
[  290.910747]  [<ffffffff810002da>] ? do_one_initcall+0x57/0x135
[  290.930455]  [<ffffffff8278ef8a>] ? kernel_init+0x16c/0x1f6
[  290.930743]  [<ffffffff8103a914>] ? kernel_thread_helper+0x4/0x10
[  290.950419]  [<ffffffff81cde23c>] ? restore_args+0x0/0x30
[  290.970152]  [<ffffffff8278ee1e>] ? kernel_init+0x0/0x1f6
[  290.970398]  [<ffffffff8103a910>] ? kernel_thread_helper+0x0/0x10

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: qlge warning
  2010-11-30 23:28 qlge warning Yinghai Lu
@ 2010-12-03 20:56 ` Jarek Poplawski
  2010-12-05 22:27   ` Ron Mercer
  2010-12-11 21:06   ` [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker Ron Mercer
  0 siblings, 2 replies; 5+ messages in thread
From: Jarek Poplawski @ 2010-12-03 20:56 UTC (permalink / raw)
  To: Yinghai Lu; +Cc: David Miller, NetDev, Ingo Molnar, Ron Mercer, linux-driver

It looks like the cancel_delayed_work_sync() call in ql_adapter_down is illegal:
we can't sync a worker that takes rtnl_lock while we are holding it in qlge_close.
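
The cycle lockdep is complaining about can be sketched like this (simplified
pseudocode built from the trace above, not the driver's literal code):

```c
/* Dependency #1: the worker acquires rtnl, so it can only
 * finish once rtnl is free. */
ql_mpi_port_cfg_work()              /* runs in workqueue context */
{
        rtnl_lock();
        /* ... touch firmware port config ... */
        rtnl_unlock();
}

/* Dependency #0: the close path holds rtnl and synchronously
 * waits for that same worker. */
qlge_close()                        /* called under rtnl_lock() */
{
        ql_adapter_down()
            -> ql_cancel_all_work_sync()
               -> cancel_delayed_work_sync(&qdev->mpi_port_cfg_work);
                  /* blocks until the worker finishes -- but the
                   * worker is waiting for rtnl, which we hold:
                   * a classic AB/BA deadlock. */
}
```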

Maintainers CC'ed.

Jarek P.

Yinghai Lu wrote:
> [full lockdep report quoted; snipped]



* Re: qlge warning
  2010-12-03 20:56 ` Jarek Poplawski
@ 2010-12-05 22:27   ` Ron Mercer
  2010-12-11 21:06   ` [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker Ron Mercer
  1 sibling, 0 replies; 5+ messages in thread
From: Ron Mercer @ 2010-12-05 22:27 UTC (permalink / raw)
  To: Jarek Poplawski
  Cc: Yinghai Lu, David Miller, NetDev, Ingo Molnar, Linux Driver

OK, I see the point.  We are working on it.
Thanks


On Fri, Dec 03, 2010 at 12:56:53PM -0800, Jarek Poplawski wrote:
> It looks like cancel_delayed_work_sync in ql_adapter_down is illegal.
> We can't sync works with rtnl_lock while holding it in qlge_close.
> 
> Maintainers CC'ed.
> 
> Jarek P.
> 
> Yinghai Lu wrote:
> > [full lockdep report quoted; snipped]


* [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker.
  2010-12-03 20:56 ` Jarek Poplawski
  2010-12-05 22:27   ` Ron Mercer
@ 2010-12-11 21:06   ` Ron Mercer
  2010-12-12 23:04     ` David Miller
  1 sibling, 1 reply; 5+ messages in thread
From: Ron Mercer @ 2010-12-11 21:06 UTC (permalink / raw)
  To: davem; +Cc: netdev, ron.mercer, jarkao2, mingo, Linux-Driver

Remove the use of rtnl_lock() to protect firmware interface registers.
These registers are accessed from worker threads, which can deadlock
if rtnl_lock is taken by upper layers while a worker is still pending.
Replace rtnl_lock with a driver mutex held only while the mailboxes
are accessed.

Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>
---
 drivers/net/qlge/qlge.h      |    1 +
 drivers/net/qlge/qlge_main.c |    1 +
 drivers/net/qlge/qlge_mpi.c  |   12 ++++--------
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/net/qlge/qlge.h b/drivers/net/qlge/qlge.h
index 2282139..9787dff 100644
--- a/drivers/net/qlge/qlge.h
+++ b/drivers/net/qlge/qlge.h
@@ -2083,6 +2083,7 @@ struct ql_adapter {
 	u32 mailbox_in;
 	u32 mailbox_out;
 	struct mbox_params idc_mbc;
+	struct mutex	mpi_mutex;
 
 	int tx_ring_size;
 	int rx_ring_size;
diff --git a/drivers/net/qlge/qlge_main.c b/drivers/net/qlge/qlge_main.c
index 528eaef..2555b1d 100644
--- a/drivers/net/qlge/qlge_main.c
+++ b/drivers/net/qlge/qlge_main.c
@@ -4629,6 +4629,7 @@ static int __devinit ql_init_device(struct pci_dev *pdev,
 	INIT_DELAYED_WORK(&qdev->mpi_idc_work, ql_mpi_idc_work);
 	INIT_DELAYED_WORK(&qdev->mpi_core_to_log, ql_mpi_core_to_log);
 	init_completion(&qdev->ide_completion);
+	mutex_init(&qdev->mpi_mutex);
 
 	if (!cards_found) {
 		dev_info(&pdev->dev, "%s\n", DRV_STRING);
diff --git a/drivers/net/qlge/qlge_mpi.c b/drivers/net/qlge/qlge_mpi.c
index 0e7c7c7..a2e919b 100644
--- a/drivers/net/qlge/qlge_mpi.c
+++ b/drivers/net/qlge/qlge_mpi.c
@@ -534,6 +534,7 @@ static int ql_mailbox_command(struct ql_adapter *qdev, struct mbox_params *mbcp)
 	int status;
 	unsigned long count;
 
+	mutex_lock(&qdev->mpi_mutex);
 
 	/* Begin polled mode for MPI */
 	ql_write32(qdev, INTR_MASK, (INTR_MASK_PI << 16));
@@ -603,6 +604,7 @@ done:
 end:
 	/* End polled mode for MPI */
 	ql_write32(qdev, INTR_MASK, (INTR_MASK_PI << 16) | INTR_MASK_PI);
+	mutex_unlock(&qdev->mpi_mutex);
 	return status;
 }
 
@@ -1099,9 +1101,7 @@ int ql_wait_fifo_empty(struct ql_adapter *qdev)
 static int ql_set_port_cfg(struct ql_adapter *qdev)
 {
 	int status;
-	rtnl_lock();
 	status = ql_mb_set_port_cfg(qdev);
-	rtnl_unlock();
 	if (status)
 		return status;
 	status = ql_idc_wait(qdev);
@@ -1122,9 +1122,7 @@ void ql_mpi_port_cfg_work(struct work_struct *work)
 	    container_of(work, struct ql_adapter, mpi_port_cfg_work.work);
 	int status;
 
-	rtnl_lock();
 	status = ql_mb_get_port_cfg(qdev);
-	rtnl_unlock();
 	if (status) {
 		netif_err(qdev, drv, qdev->ndev,
 			  "Bug: Failed to get port config data.\n");
@@ -1167,7 +1165,6 @@ void ql_mpi_idc_work(struct work_struct *work)
 	u32 aen;
 	int timeout;
 
-	rtnl_lock();
 	aen = mbcp->mbox_out[1] >> 16;
 	timeout = (mbcp->mbox_out[1] >> 8) & 0xf;
 
@@ -1231,7 +1228,6 @@ void ql_mpi_idc_work(struct work_struct *work)
 		}
 		break;
 	}
-	rtnl_unlock();
 }
 
 void ql_mpi_work(struct work_struct *work)
@@ -1242,7 +1238,7 @@ void ql_mpi_work(struct work_struct *work)
 	struct mbox_params *mbcp = &mbc;
 	int err = 0;
 
-	rtnl_lock();
+	mutex_lock(&qdev->mpi_mutex);
 	/* Begin polled mode for MPI */
 	ql_write32(qdev, INTR_MASK, (INTR_MASK_PI << 16));
 
@@ -1259,7 +1255,7 @@ void ql_mpi_work(struct work_struct *work)
 
 	/* End polled mode for MPI */
 	ql_write32(qdev, INTR_MASK, (INTR_MASK_PI << 16) | INTR_MASK_PI);
-	rtnl_unlock();
+	mutex_unlock(&qdev->mpi_mutex);
 	ql_enable_completion_interrupt(qdev, 0);
 }
 
-- 
1.6.0.2



* Re: [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker.
  2010-12-11 21:06   ` [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker Ron Mercer
@ 2010-12-12 23:04     ` David Miller
  0 siblings, 0 replies; 5+ messages in thread
From: David Miller @ 2010-12-12 23:04 UTC (permalink / raw)
  To: ron.mercer; +Cc: netdev, jarkao2, mingo, Linux-Driver

From: Ron Mercer <ron.mercer@qlogic.com>
Date: Sat, 11 Dec 2010 13:06:50 -0800

> Removing usage of rtnl_lock() to protect firmware interface registers.
> These registers are accessed in some worker threads and can create a
> deadlock if rtnl_lock is taken by upper layers while the worker is still
> pending.
> We remove rtnl_lock and use a driver mutex just while mailboxes are
> accessed.
> 
> Signed-off-by: Ron Mercer <ron.mercer@qlogic.com>

Applied.


Thread overview: 5 messages
2010-11-30 23:28 qlge warning Yinghai Lu
2010-12-03 20:56 ` Jarek Poplawski
2010-12-05 22:27   ` Ron Mercer
2010-12-11 21:06   ` [net-2.6 PATCH 1/1] qlge: Fix deadlock when cancelling worker Ron Mercer
2010-12-12 23:04     ` David Miller
