From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jarek Poplawski
Subject: [PATCH net-next] net: fix lockdep warning in qdisc_lock_tree()
Date: Thu, 10 Jul 2008 20:14:16 +0200
Message-ID: <20080710181416.GA8265@ami.dom.local>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Received: from yx-out-2324.google.com ([74.125.44.30]:37216 "EHLO yx-out-2324.google.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752676AbYGJSOe (ORCPT );
	Thu, 10 Jul 2008 14:14:34 -0400
Received: by yx-out-2324.google.com with SMTP id 8so1158821yxm.1 for ;
	Thu, 10 Jul 2008 11:14:31 -0700 (PDT)
Content-Disposition: inline
In-Reply-To:
Sender: linux-next-owner@vger.kernel.org
List-ID:
To: Alexander Beregalov
Cc: Linux Kernel Mailing List , linux-next@vger.kernel.org, netdev@vger.kernel.org,
	David Miller

Alexander Beregalov wrote, On 07/10/2008 11:04 AM:
...
> [    0.426638] =============================================
> [    0.426638] [ INFO: possible recursive locking detected ]
> [    0.426638] 2.6.26-rc9-next-20080710 #5
> [    0.426638] ---------------------------------------------
> [    0.426638] swapper/1 is trying to acquire lock:
> [    0.426638]  (&queue->lock){-...}, at: [] qdisc_lock_tree+0x27/0x2c
> [    0.426638]
> [    0.426638] but task is already holding lock:
> [    0.426638]  (&queue->lock){-...}, at: [] qdisc_lock_tree+0x1f/0x2c
> [    0.426638]
> [    0.426638] other info that might help us debug this:
> [    0.426638] 3 locks held by swapper/1:
> [    0.426638]  #0:  (net_mutex){--..}, at: [] register_pernet_device+0x1a/0x5a
> [    0.426638]  #1:  (rtnl_mutex){--..}, at: [] rtnl_lock+0x12/0x14
> [    0.426638]  #2:  (&queue->lock){-...}, at: [] qdisc_lock_tree+0x1f/0x2c
> [    0.426638]
> [    0.426638] stack backtrace:
> [    0.426638] Pid: 1, comm: swapper Not tainted 2.6.26-rc9-next-20080710 #5
> [    0.426638]
> [    0.426638] Call Trace:
> [    0.426638]  [] __lock_acquire+0xba9/0xf12
> [    0.426638]  [] ? qdisc_lock_tree+0x27/0x2c
> [    0.426638]  [] lock_acquire+0x85/0xa9
> [    0.426638]  [] ? qdisc_lock_tree+0x27/0x2c
> [    0.426638]  [] _spin_lock+0x25/0x31
> [    0.426638]  [] qdisc_lock_tree+0x27/0x2c
> [    0.426638]  [] dev_init_scheduler+0x11/0x94
> [    0.426638]  [] register_netdevice+0x296/0x3f0
> [    0.426638]  [] register_netdev+0x3a/0x48
> [    0.426638]  [] loopback_net_init+0x40/0x7a
> [    0.426638]  [] ? loopback_init+0x0/0x12
...

lockdep needs separate lock init to distinguish rx and tx queue locks.
(There is no real lockup danger.)

Reported-by: Alexander Beregalov
Signed-off-by: Jarek Poplawski
---
 net/core/dev.c |   20 ++++++++++++++------
 1 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a29a359..157b683 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4090,17 +4090,25 @@ static struct net_device_stats *internal_stats(struct net_device *dev)
 	return &dev->stats;
 }
 
-static void netdev_init_one_queue(struct net_device *dev,
-				  struct netdev_queue *queue)
+static void netdev_init_rx_queue(struct net_device *dev,
+				 struct netdev_queue *rx_queue)
 {
-	spin_lock_init(&queue->lock);
-	queue->dev = dev;
+	spin_lock_init(&rx_queue->lock);
+	rx_queue->dev = dev;
+}
+
+/* lockdep needs separate init to distinguish these locks */
+static void netdev_init_tx_queue(struct net_device *dev,
+				 struct netdev_queue *tx_queue)
+{
+	spin_lock_init(&tx_queue->lock);
+	tx_queue->dev = dev;
 }
 
 static void netdev_init_queues(struct net_device *dev)
 {
-	netdev_init_one_queue(dev, &dev->rx_queue);
-	netdev_init_one_queue(dev, &dev->tx_queue);
+	netdev_init_rx_queue(dev, &dev->rx_queue);
+	netdev_init_tx_queue(dev, &dev->tx_queue);
 }
 
 /**