From: Ron Mercer
Subject: Re: qlge warning
Date: Sun, 5 Dec 2010 14:27:18 -0800
Message-ID: <20101205222718.GA2609@linux-ox1b.qlogic.org>
References: <4CF95995.1070506@gmail.com>
In-Reply-To: <4CF95995.1070506@gmail.com>
To: Jarek Poplawski
Cc: Yinghai Lu, David Miller, NetDev, Ingo Molnar, Linux Driver

OK, I see the point. We are working on it.

Thanks

On Fri, Dec 03, 2010 at 12:56:53PM -0800, Jarek Poplawski wrote:
> It looks like cancel_delayed_work_sync in ql_adapter_down is illegal.
> We can't sync works with rtnl_lock while holding it in qlge_close.
>
> Maintainers CC'ed.
>
> Jarek P.
>
> Yinghai Lu wrote:
> > [ 290.233264] =======================================================
> > [ 290.251780] [ INFO: possible circular locking dependency detected ]
> > [ 290.271534] 2.6.37-rc4-tip-yh-05919-geb30094-dirty #308
> > [ 290.271775] -------------------------------------------------------
> > [ 290.291512] swapper/1 is trying to acquire lock:
> > [ 290.291725]  ((&(&qdev->mpi_port_cfg_work)->work)){+.+...}, at: [] wait_on_work+0x0/0xff
> > [ 290.311643]
> > [ 290.311644] but task is already holding lock:
> > [ 290.311915]  (rtnl_mutex){+.+.+.}, at: [] rtnl_lock+0x17/0x19
> > [ 290.331681]
> > [ 290.331682] which lock already depends on the new lock.
> > [ 290.331684]
> > [ 290.351491]
> > [ 290.351492] the existing dependency chain (in reverse order) is:
> > [ 290.351830]
> > [ 290.351831] -> #1 (rtnl_mutex){+.+.+.}:
> > [ 290.371562]        [] lock_acquire+0xca/0xf0
> > [ 290.371824]        [] mutex_lock_nested+0x60/0x2b8
> > [ 290.391539]        [] rtnl_lock+0x17/0x19
> > [ 290.411250]        [] ql_mpi_port_cfg_work+0x1f/0x1ad
> > [ 290.411606]        [] process_one_work+0x234/0x3e8
> > [ 290.431282]        [] worker_thread+0x17f/0x261
> > [ 290.431583]        [] kthread+0xa0/0xa8
> > [ 290.451279]        [] kernel_thread_helper+0x4/0x10
> > [ 290.451581]
> > [ 290.451582] -> #0 ((&(&qdev->mpi_port_cfg_work)->work)){+.+...}:
> > [ 290.471483]        [] __lock_acquire+0x113c/0x1813
> > [ 290.491177]        [] lock_acquire+0xca/0xf0
> > [ 290.491451]        [] wait_on_work+0x53/0xff
> > [ 290.511128]        [] __cancel_work_timer+0xc2/0x102
> > [ 290.511434]        [] cancel_delayed_work_sync+0x12/0x14
> > [ 290.531233]        [] ql_cancel_all_work_sync+0x64/0x68
> > [ 290.531563]        [] ql_adapter_down+0x23/0xf6
> > [ 290.551298]        [] qlge_close+0x67/0x76
> > [ 290.571015]        [] __dev_close+0x7b/0x89
> > [ 290.571297]        [] __dev_change_flags+0xad/0x131
> > [ 290.590974]        [] dev_change_flags+0x21/0x57
> > [ 290.591280]        [] ic_close_devs+0x2e/0x48
> > [ 290.610978]        [] ip_auto_config+0xbc9/0xe84
> > [ 290.611280]        [] do_one_initcall+0x57/0x135
> > [ 290.630977]        [] kernel_init+0x16c/0x1f6
> > [ 290.631263]        [] kernel_thread_helper+0x4/0x10
> > [ 290.651000]
> > [ 290.651001] other info that might help us debug this:
> > [ 290.651003]
> > [ 290.670829] 1 lock held by swapper/1:
> > [ 290.671013]  #0:  (rtnl_mutex){+.+.+.}, at: [] rtnl_lock+0x17/0x19
> > [ 290.690819]
> > [ 290.690820] stack backtrace:
> > [ 290.691054] Pid: 1, comm: swapper Not tainted 2.6.37-rc4-tip-yh-05919-geb30094-dirty #308
> > [ 290.710805] Call Trace:
> > [ 290.710938]  [] ? print_circular_bug+0xaf/0xbe
> > [ 290.730683]  [] ? __lock_acquire+0x113c/0x1813
> > [ 290.730955]  [] ? wait_on_cpu_work+0xdb/0x114
> > [ 290.750672]  [] ? wait_on_work+0x0/0xff
> > [ 290.750939]  [] ? lock_acquire+0xca/0xf0
> > [ 290.770664]  [] ? wait_on_work+0x0/0xff
> > [ 290.770920]  [] ? wait_on_work+0x53/0xff
> > [ 290.790575]  [] ? wait_on_work+0x0/0xff
> > [ 290.790821]  [] ? __cancel_work_timer+0xc2/0x102
> > [ 290.810559]  [] ? cancel_delayed_work_sync+0x12/0x14
> > [ 290.810855]  [] ? ql_cancel_all_work_sync+0x64/0x68
> > [ 290.830594]  [] ? ql_adapter_down+0x23/0xf6
> > [ 290.830867]  [] ? qlge_close+0x67/0x76
> > [ 290.850568]  [] ? __dev_close+0x7b/0x89
> > [ 290.850829]  [] ? __dev_change_flags+0xad/0x131
> > [ 290.870540]  [] ? dev_change_flags+0x21/0x57
> > [ 290.870815]  [] ? ic_close_devs+0x2e/0x48
> > [ 290.890595]  [] ? ip_auto_config+0xbc9/0xe84
> > [ 290.910247]  [] ? printk+0x41/0x43
> > [ 290.910488]  [] ? ip_auto_config+0x0/0xe84
> > [ 290.910747]  [] ? do_one_initcall+0x57/0x135
> > [ 290.930455]  [] ? kernel_init+0x16c/0x1f6
> > [ 290.930743]  [] ? kernel_thread_helper+0x4/0x10
> > [ 290.950419]  [] ? restore_args+0x0/0x30
> > [ 290.970152]  [] ? kernel_init+0x0/0x1f6
> > [ 290.970398]  [] ? kernel_thread_helper+0x0/0x10
> > --
> > To unsubscribe from this list: send the line "unsubscribe netdev" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>