netdev.vger.kernel.org archive mirror
* [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
@ 2014-06-20  9:32 rongqing.li
  2014-06-23 21:50 ` David Miller
  2014-06-24 22:54 ` David Miller
  0 siblings, 2 replies; 8+ messages in thread
From: rongqing.li @ 2014-06-20  9:32 UTC (permalink / raw)
  To: netdev, eric.dumazet; +Cc: hariprasad, leedom, greearb

From: Li RongQing <roy.qing.li@gmail.com> 

cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
both thread and softirq context, but it does not disable BH; the lockdep
report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
protection, so there is no need to hold the spin lock as well.
	=================================
	[ INFO: inconsistent lock state ]
	3.14.7+ #24 Tainted: G         C O
	---------------------------------
	inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
	radvd/3794 [HC0[0]:SC1[1]:HE1:SE0] takes:
	 (adap_rcu_lock){+.?...}, at: [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	{SOFTIRQ-ON-W} state was registered at:
	  [<ffffffff810fca81>] __lock_acquire+0x34a/0xe48
	  [<ffffffff810fd98b>] lock_acquire+0x82/0x9d
	  [<ffffffff815d6ff8>] _raw_spin_lock+0x34/0x43
	  [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	  [<ffffffffa0998beb>] cxgb4_inet6addr_handler+0x117/0x12c [cxgb4]
	  [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	  [<ffffffff815da9f9>] __atomic_notifier_call_chain+0x44/0x6e
	  [<ffffffff815daa32>] atomic_notifier_call_chain+0xf/0x11
	  [<ffffffff815b1356>] inet6addr_notifier_call_chain+0x16/0x18
	  [<ffffffffa01f72e5>] ipv6_add_addr+0x404/0x46e [ipv6]
	  [<ffffffffa01f8df0>] addrconf_add_linklocal+0x5f/0x95 [ipv6]
	  [<ffffffffa01fc3e9>] addrconf_notify+0x632/0x841 [ipv6]
	  [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	  [<ffffffff810e09a1>] __raw_notifier_call_chain+0x9/0xb
	  [<ffffffff810e09b2>] raw_notifier_call_chain+0xf/0x11
	  [<ffffffff8151b3b7>] call_netdevice_notifiers_info+0x4e/0x56
	  [<ffffffff8151b3d0>] call_netdevice_notifiers+0x11/0x13
	  [<ffffffff8151c0a6>] netdev_state_change+0x1f/0x38
	  [<ffffffff8152f004>] linkwatch_do_dev+0x3b/0x49
	  [<ffffffff8152f184>] __linkwatch_run_queue+0x10b/0x144
	  [<ffffffff8152f1dd>] linkwatch_event+0x20/0x27
	  [<ffffffff810d7bc0>] process_one_work+0x1cb/0x2ee
	  [<ffffffff810d7e3b>] worker_thread+0x12e/0x1fc
	  [<ffffffff810dd391>] kthread+0xc4/0xcc
	  [<ffffffff815dc48c>] ret_from_fork+0x7c/0xb0
	irq event stamp: 3388
	hardirqs last  enabled at (3388): [<ffffffff810c6c85>]
	__local_bh_enable_ip+0xaa/0xd9
	hardirqs last disabled at (3387): [<ffffffff810c6c2d>]
	__local_bh_enable_ip+0x52/0xd9
	softirqs last  enabled at (3288): [<ffffffffa01f1d5b>]
	rcu_read_unlock_bh+0x0/0x2f [ipv6]
	softirqs last disabled at (3289): [<ffffffff815ddafc>]
	do_softirq_own_stack+0x1c/0x30

	other info that might help us debug this:
	 Possible unsafe locking scenario:

	       CPU0
	       ----
	  lock(adap_rcu_lock);
	  <Interrupt>
	    lock(adap_rcu_lock);

	 *** DEADLOCK ***

	5 locks held by radvd/3794:
	 #0:  (sk_lock-AF_INET6){+.+.+.}, at: [<ffffffffa020b85a>]
	rawv6_sendmsg+0x74b/0xa4d [ipv6]
	 #1:  (rcu_read_lock){.+.+..}, at: [<ffffffff8151ac6b>]
	rcu_lock_acquire+0x0/0x29
	 #2:  (rcu_read_lock){.+.+..}, at: [<ffffffffa01f4cca>]
	rcu_lock_acquire.constprop.16+0x0/0x30 [ipv6]
	 #3:  (rcu_read_lock){.+.+..}, at: [<ffffffff810e09b4>]
	rcu_lock_acquire+0x0/0x29
	 #4:  (rcu_read_lock){.+.+..}, at: [<ffffffffa0998782>]
	rcu_lock_acquire.constprop.40+0x0/0x30 [cxgb4]

	stack backtrace:
	CPU: 7 PID: 3794 Comm: radvd Tainted: G         C O 3.14.7+ #24
	Hardware name: Supermicro X7DBU/X7DBU, BIOS 6.00 12/03/2007
	 ffffffff81f15990 ffff88012fdc36a8 ffffffff815d0016 0000000000000006
	 ffff8800c80dc2a0 ffff88012fdc3708 ffffffff815cc727 0000000000000001
	 0000000000000001 ffff880100000000 ffffffff81015b02 ffff8800c80dcb58
	Call Trace:
	 <IRQ>  [<ffffffff815d0016>] dump_stack+0x4e/0x71
	 [<ffffffff815cc727>] print_usage_bug+0x1ec/0x1fd
	 [<ffffffff81015b02>] ? save_stack_trace+0x27/0x44
	 [<ffffffff810fbfaa>] ? check_usage_backwards+0xa0/0xa0
	 [<ffffffff810fc640>] mark_lock+0x11b/0x212
	 [<ffffffff810fca0b>] __lock_acquire+0x2d4/0xe48
	 [<ffffffff810fbfaa>] ? check_usage_backwards+0xa0/0xa0
	 [<ffffffff810fbff6>] ? check_usage_forwards+0x4c/0xa6
	 [<ffffffff810c6c8a>] ? __local_bh_enable_ip+0xaf/0xd9
	 [<ffffffff810fd98b>] lock_acquire+0x82/0x9d
	 [<ffffffffa09989ea>] ? clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa0998782>] ? rcu_read_unlock+0x23/0x23 [cxgb4]
	 [<ffffffff815d6ff8>] _raw_spin_lock+0x34/0x43
	 [<ffffffffa09989ea>] ? clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa09987b0>] ? rcu_lock_acquire.constprop.40+0x2e/0x30 [cxgb4]
	 [<ffffffffa0998782>] ? rcu_read_unlock+0x23/0x23 [cxgb4]
	 [<ffffffffa09989ea>] clip_add+0x2c/0x116 [cxgb4]
	 [<ffffffffa0998beb>] cxgb4_inet6addr_handler+0x117/0x12c [cxgb4]
	 [<ffffffff810fd99d>] ? lock_acquire+0x94/0x9d
	 [<ffffffff810e09b4>] ? raw_notifier_call_chain+0x11/0x11
	 [<ffffffff815da98b>] notifier_call_chain+0x32/0x5c
	 [<ffffffff815da9f9>] __atomic_notifier_call_chain+0x44/0x6e
	 [<ffffffff815daa32>] atomic_notifier_call_chain+0xf/0x11
	 [<ffffffff815b1356>] inet6addr_notifier_call_chain+0x16/0x18
	 [<ffffffffa01f72e5>] ipv6_add_addr+0x404/0x46e [ipv6]
	 [<ffffffff810fde6a>] ? trace_hardirqs_on+0xd/0xf
	 [<ffffffffa01fb634>] addrconf_prefix_rcv+0x385/0x6ea [ipv6]
	 [<ffffffffa0207950>] ndisc_rcv+0x9d3/0xd76 [ipv6]
	 [<ffffffffa020d536>] icmpv6_rcv+0x592/0x67b [ipv6]
	 [<ffffffff810c6c85>] ? __local_bh_enable_ip+0xaa/0xd9
	 [<ffffffff810c6c85>] ? __local_bh_enable_ip+0xaa/0xd9
	 [<ffffffff810fd8dc>] ? lock_release+0x14e/0x17b
	 [<ffffffffa020df97>] ? rcu_read_unlock+0x21/0x23 [ipv6]
	 [<ffffffff8150df52>] ? rcu_read_unlock+0x23/0x23
	 [<ffffffffa01f4ede>] ip6_input_finish+0x1e4/0x2fc [ipv6]
	 [<ffffffffa01f540b>] ip6_input+0x33/0x38 [ipv6]
	 [<ffffffffa01f5557>] ip6_mc_input+0x147/0x160 [ipv6]
	 [<ffffffffa01f4ba3>] ip6_rcv_finish+0x7c/0x81 [ipv6]
	 [<ffffffffa01f5397>] ipv6_rcv+0x3a1/0x3e2 [ipv6]
	 [<ffffffff8151ef96>] __netif_receive_skb_core+0x4ab/0x511
	 [<ffffffff810fdc94>] ? mark_held_locks+0x71/0x99
	 [<ffffffff8151f0c0>] ? process_backlog+0x69/0x15e
	 [<ffffffff8151f045>] __netif_receive_skb+0x49/0x5b
	 [<ffffffff8151f0cf>] process_backlog+0x78/0x15e
	 [<ffffffff8151f571>] ? net_rx_action+0x1a2/0x1cc
	 [<ffffffff8151f47b>] net_rx_action+0xac/0x1cc
	 [<ffffffff810c69b7>] ? __do_softirq+0xad/0x218
	 [<ffffffff810c69ff>] __do_softirq+0xf5/0x218
	 [<ffffffff815ddafc>] do_softirq_own_stack+0x1c/0x30
	 <EOI>  [<ffffffff810c6bb6>] do_softirq+0x38/0x5d
	 [<ffffffffa01f1d5b>] ? ip6_copy_metadata+0x156/0x156 [ipv6]
	 [<ffffffff810c6c78>] __local_bh_enable_ip+0x9d/0xd9
	 [<ffffffffa01f1d88>] rcu_read_unlock_bh+0x2d/0x2f [ipv6]
	 [<ffffffffa01f28b4>] ip6_finish_output2+0x381/0x3d8 [ipv6]
	 [<ffffffffa01f49ef>] ip6_finish_output+0x6e/0x73 [ipv6]
	 [<ffffffffa01f4a70>] ip6_output+0x7c/0xa8 [ipv6]
	 [<ffffffff815b1bfa>] dst_output+0x18/0x1c
	 [<ffffffff815b1c9e>] ip6_local_out+0x1c/0x21
	 [<ffffffffa01f2489>] ip6_push_pending_frames+0x37d/0x427 [ipv6]
	 [<ffffffff81558af8>] ? skb_orphan+0x39/0x39
	 [<ffffffffa020b85a>] ? rawv6_sendmsg+0x74b/0xa4d [ipv6]
	 [<ffffffffa020ba51>] rawv6_sendmsg+0x942/0xa4d [ipv6]
	 [<ffffffff81584cd2>] inet_sendmsg+0x3d/0x66
	 [<ffffffff81508930>] __sock_sendmsg_nosec+0x25/0x27
	 [<ffffffff8150b0d7>] sock_sendmsg+0x5a/0x7b
	 [<ffffffff810fd8dc>] ? lock_release+0x14e/0x17b
	 [<ffffffff8116d756>] ? might_fault+0x9e/0xa5
	 [<ffffffff8116d70d>] ? might_fault+0x55/0xa5
	 [<ffffffff81508cb1>] ? copy_from_user+0x2a/0x2c
	 [<ffffffff8150b70c>] ___sys_sendmsg+0x226/0x2d9
	 [<ffffffff810fcd25>] ? __lock_acquire+0x5ee/0xe48
	 [<ffffffff810fde01>] ? trace_hardirqs_on_caller+0x145/0x1a1
	 [<ffffffff8118efcb>] ? slab_free_hook.isra.71+0x50/0x59
	 [<ffffffff8115c81f>] ? release_pages+0xbc/0x181
	 [<ffffffff810fd99d>] ? lock_acquire+0x94/0x9d
	 [<ffffffff81115e97>] ? read_seqcount_begin.constprop.25+0x73/0x90
	 [<ffffffff8150c408>] __sys_sendmsg+0x3d/0x5b
	 [<ffffffff8150c433>] SyS_sendmsg+0xd/0x19
	 [<ffffffff815dc53d>] system_call_fastpath+0x1a/0x1f
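
As an aside, a minimal sketch of the pattern the trace above flags may help;
the example_* names are hypothetical and this is not cxgb4 code. A plain
spin_lock() leaves BHs enabled, so a softirq arriving on the same CPU can
re-enter the same lock and deadlock, whereas a reader relying on
rcu_read_lock() alone, as this patch switches to, has nothing to deadlock on:

#include <linux/rculist.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_entry {
        struct list_head node;
        int id;
};

static LIST_HEAD(example_list);                /* hypothetical list */
static DEFINE_SPINLOCK(example_lock);          /* hypothetical lock */

/* Problematic reader: may run in both process and softirq context. */
static bool lookup_locked(int id)
{
        struct example_entry *e;
        bool found = false;

        spin_lock(&example_lock);      /* BHs stay enabled here, so a softirq
                                        * that interrupts the walk and calls
                                        * lookup_locked() again spins on
                                        * example_lock forever. */
        list_for_each_entry(e, &example_list, node) {
                if (e->id == id) {
                        found = true;
                        break;
                }
        }
        spin_unlock(&example_lock);
        return found;
}

/* Lock-free reader in the style the patch adopts: RCU protection only. */
static bool lookup_rcu(int id)
{
        struct example_entry *e;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(e, &example_list, node) {
                if (e->id == id) {
                        found = true;
                        break;
                }
        }
        rcu_read_unlock();
        return found;
}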

Reported-by: Ben Greear <greearb@candelatech.com>
Cc: Casey Leedom <leedom@chelsio.com>
Cc: Hariprasad Shenai <hariprasad@chelsio.com>
Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c |   16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index 2f8d6b9..a83271c 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -4057,22 +4057,19 @@ int cxgb4_unregister_uld(enum cxgb4_uld type)
 EXPORT_SYMBOL(cxgb4_unregister_uld);
 
 /* Check if netdev on which event is occured belongs to us or not. Return
- * suceess (1) if it belongs otherwise failure (0).
+ * success (true) if it belongs otherwise failure (false).
+ * Called with rcu_read_lock() held.
  */
-static int cxgb4_netdev(struct net_device *netdev)
+static bool cxgb4_netdev(const struct net_device *netdev)
 {
 	struct adapter *adap;
 	int i;
 
-	spin_lock(&adap_rcu_lock);
 	list_for_each_entry_rcu(adap, &adap_rcu_list, rcu_node)
 		for (i = 0; i < MAX_NPORTS; i++)
-			if (adap->port[i] == netdev) {
-				spin_unlock(&adap_rcu_lock);
-				return 1;
-			}
-	spin_unlock(&adap_rcu_lock);
-	return 0;
+			if (adap->port[i] == netdev)
+				return true;
+	return false;
 }
 
 static int clip_add(struct net_device *event_dev, struct inet6_ifaddr *ifa,
@@ -6396,6 +6393,7 @@ static void remove_one(struct pci_dev *pdev)
 			adapter->flags &= ~DEV_ENABLED;
 		}
 		pci_release_regions(pdev);
+		synchronize_rcu();
 		kfree(adapter);
 	} else
 		pci_release_regions(pdev);
-- 
1.7.10.4

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-20  9:32 [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list rongqing.li
@ 2014-06-23 21:50 ` David Miller
  2014-06-23 22:35   ` Casey Leedom
  2014-06-24 22:54 ` David Miller
  1 sibling, 1 reply; 8+ messages in thread
From: David Miller @ 2014-06-23 21:50 UTC (permalink / raw)
  To: rongqing.li; +Cc: netdev, eric.dumazet, hariprasad, leedom, greearb

From: <rongqing.li@windriver.com>
Date: Fri, 20 Jun 2014 17:32:36 +0800

> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
> both thread and softirq context, but it does not disable BH; the lockdep
> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
> protection, so there is no need to hold the spin lock as well.

I think this change is fine, and correct, but I would like to see some
reviews from the cxgb4 maintainers.

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-23 21:50 ` David Miller
@ 2014-06-23 22:35   ` Casey Leedom
  2014-06-24  0:20     ` Li RongQing
  2014-06-24  6:38     ` Eric Dumazet
  0 siblings, 2 replies; 8+ messages in thread
From: Casey Leedom @ 2014-06-23 22:35 UTC (permalink / raw)
  To: David Miller, rongqing.li; +Cc: netdev, eric.dumazet, hariprasad, greearb


On 06/23/14 14:50, David Miller wrote:
> From: <rongqing.li@windriver.com>
> Date: Fri, 20 Jun 2014 17:32:36 +0800
>
>> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
>> both thread and softirq context, but it does not disable BH; the lockdep
>> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
>> protection, so there is no need to hold the spin lock as well.
> I think this change is fine, and correct, but I would like to see some
> reviews from the cxgb4 maintainers.

   Thanks David.  Hari is gone on PTO so I think I'm the next logical 
person ... :-)

   I've gone back and reviewed the original patch, Eric Dumazet's reply 
and revised patch and compared that against this proposed patch.  Li 
RongQing is submitting the same patch that Eric suggested with the 
addition of a call to synchronize_rcu() in the driver's remove() 
function.  I'm not super familiar with the RCU system but that addition 
certainly seems innocuous enough.  Other than that, everything looks fine.

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-23 22:35   ` Casey Leedom
@ 2014-06-24  0:20     ` Li RongQing
  2014-06-24  6:38     ` Eric Dumazet
  1 sibling, 0 replies; 8+ messages in thread
From: Li RongQing @ 2014-06-24  0:20 UTC (permalink / raw)
  To: Casey Leedom
  Cc: David Miller, Rongqing Li, netdev, Eric Dumazet, hariprasad,
	Ben Greear

On Tue, Jun 24, 2014 at 6:35 AM, Casey Leedom <leedom@chelsio.com> wrote:
>> I think this change is fine, and correct, but I would like to see some
>> reviews from the cxgb4 maintainers.
>
>
>   Thanks David.  Hari is gone on PTO so I think I'm the next logical person
> ... :-)
>
>   I've gone back and reviewed the original patch, Eric Dumazet's reply and
> revised patch and compared that against this proposed patch.  Li RongQing is
> submitting the same patch that Eric suggested with the addition of a call to
> synchronize_rcu() in the driver's remove() function.  I'm not super familiar
> with the RCU system but that addition certainly seems innocuous enough.
> Other than that, everything looks fine.

I think the synchronize_rcu() is needed. As Documentation/RCU/whatisRCU.txt
explains, the RCU update sequence is:

So the typical RCU update sequence goes something like the following:

a.      Remove pointers to a data structure, so that subsequent
        readers cannot gain a reference to it.

b.      Wait for all previous readers to complete their RCU read-side
        critical sections.

c.      At this point, there cannot be any readers who hold references
        to the data structure, so it now may safely be reclaimed
        (e.g., kfree()d).


To achieve step b, synchronize_rcu(), call_rcu(), or kfree_rcu() is needed.
Since this removal happens rarely, I added a synchronize_rcu() to avoid
adding an rcu_head to struct adapter or changing the code more broadly.
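
As a minimal sketch of steps a, b and c (hypothetical names, not the actual
cxgb4 remove path), the usual unlink-wait-free sequence looks roughly like
this, with synchronize_rcu() sitting between the unlink and the kfree():

#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct example_entry {
        struct list_head node;
};

static DEFINE_SPINLOCK(example_update_lock);   /* serializes writers only */

static void example_remove(struct example_entry *e)
{
        /* a. unlink, so new RCU readers can no longer find the entry */
        spin_lock(&example_update_lock);
        list_del_rcu(&e->node);
        spin_unlock(&example_update_lock);

        /* b. wait for all pre-existing rcu_read_lock() sections to end */
        synchronize_rcu();

        /* c. no reader can still hold a reference, so freeing is safe */
        kfree(e);
}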

-Roy

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-23 22:35   ` Casey Leedom
  2014-06-24  0:20     ` Li RongQing
@ 2014-06-24  6:38     ` Eric Dumazet
  2014-06-24 16:42       ` Casey Leedom
  1 sibling, 1 reply; 8+ messages in thread
From: Eric Dumazet @ 2014-06-24  6:38 UTC (permalink / raw)
  To: Casey Leedom; +Cc: David Miller, rongqing.li, netdev, hariprasad, greearb

On Mon, 2014-06-23 at 15:35 -0700, Casey Leedom wrote:
> On 06/23/14 14:50, David Miller wrote:
> > From: <rongqing.li@windriver.com>
> > Date: Fri, 20 Jun 2014 17:32:36 +0800
> >
> >> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
> >> both thread and softirq context, but it does not disable BH; the lockdep
> >> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
> >> protection, so there is no need to hold the spin lock as well.
> > I think this change is fine, and correct, but I would like to see some
> > reviews from the cxgb4 maintainers.
> 
>    Thanks David.  Hari is gone on PTO so I think I'm the next logical 
> person ... :-)
> 
>    I've gone back and reviewed the original patch, Eric Dumazet's reply 
> and revised patch and compared that against this proposed patch.  Li 
> RongQing is submitting the same patch that Eric suggested with the 
> addition of a call to synchronize_rcu() in the driver's remove() 
> function.  I'm not super familiar with the RCU system but that addition 
> certainly seems innocuous enough.  Other than that, everything looks fine.

Yes, the synchronize_rcu() is needed.

In fact I do not really understand why RCU was even used in this slow
path...
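
A rough sketch of the non-RCU alternative that remark hints at, reusing the
hypothetical example_* declarations from the sketch in the first message:
since the reader can run in softirq context, a BH-disabling spin lock would
be the straightforward choice for such a rarely-taken path.

/* Reuses the hypothetical example_entry/example_list/example_lock
 * declarations from the earlier sketch; illustrative only. */
static bool lookup_bh_locked(int id)
{
        struct example_entry *e;
        bool found = false;

        spin_lock_bh(&example_lock);   /* BHs disabled: no softirq re-entry */
        list_for_each_entry(e, &example_list, node) {
                if (e->id == id) {
                        found = true;
                        break;
                }
        }
        spin_unlock_bh(&example_lock);
        return found;
}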

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-24  6:38     ` Eric Dumazet
@ 2014-06-24 16:42       ` Casey Leedom
  0 siblings, 0 replies; 8+ messages in thread
From: Casey Leedom @ 2014-06-24 16:42 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: David Miller, rongqing.li, netdev, hariprasad, greearb

   Okay, so it looks like everyone is happy with Li's patch.  So I think 
I'm supposed to say:

   Acked-by: Casey Leedom <leedom@chelsio.com>

Casey

On 06/23/14 23:38, Eric Dumazet wrote:
> On Mon, 2014-06-23 at 15:35 -0700, Casey Leedom wrote:
>> On 06/23/14 14:50, David Miller wrote:
>>> From: <rongqing.li@windriver.com>
>>> Date: Fri, 20 Jun 2014 17:32:36 +0800
>>>
>>>> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
>>>> both thread and softirq context, but it does not disable BH; the lockdep
>>>> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
>>>> protection, so there is no need to hold the spin lock as well.
>>> I think this change is fine, and correct, but I would like to see some
>>> reviews from the cxgb4 maintainers.
>>     Thanks David.  Hari is gone on PTO so I think I'm the next logical
>> person ... :-)
>>
>>     I've gone back and reviewed the original patch, Eric Dumazet's reply
>> and revised patch and compared that against this proposed patch.  Li
>> RongQing is submitting the same patch that Eric suggested with the
>> addition of a call to synchronize_rcu() in the driver's remove()
>> function.  I'm not super familiar with the RCU system but that addition
>> certainly seems innocuous enough.  Other than that, everything looks fine.
> Yes, the synchronize_rcu() is needed.
>
> In fact I do not really understand why RCU was even used in this slow
> path...
>
>
>

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-20  9:32 [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list rongqing.li
  2014-06-23 21:50 ` David Miller
@ 2014-06-24 22:54 ` David Miller
  2014-06-24 22:56   ` Casey Leedom
  1 sibling, 1 reply; 8+ messages in thread
From: David Miller @ 2014-06-24 22:54 UTC (permalink / raw)
  To: rongqing.li; +Cc: netdev, eric.dumazet, hariprasad, leedom, greearb

From: <rongqing.li@windriver.com>
Date: Fri, 20 Jun 2014 17:32:36 +0800

> From: Li RongQing <roy.qing.li@gmail.com> 
> 
> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
> both thread and softirq context, but it does not disable BH; the lockdep
> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
> protection, so there is no need to hold the spin lock as well.
 ...
> Reported-by: Ben Greear <greearb@candelatech.com>
> Cc: Casey Leedom <leedom@chelsio.com>
> Cc: Hariprasad Shenai <hariprasad@chelsio.com>
> Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>

Applied, thanks.

Casey please _do not_ put indentation in front of your Acked-by:,
automated tools like patchwork will not pick it up if you do that.

I'm actually very surprised how much spurious indentation and weird
spacing shows up in some people's postings.  You folks definitely are
using graphical tools to type this stuff in, rather than real pure
text editors.

* Re: [PATCH][RFC] cxgb4: Not need to hold the adap_rcu_lock lock when read adap_rcu_list
  2014-06-24 22:54 ` David Miller
@ 2014-06-24 22:56   ` Casey Leedom
  0 siblings, 0 replies; 8+ messages in thread
From: Casey Leedom @ 2014-06-24 22:56 UTC (permalink / raw)
  To: David Miller; +Cc: rongqing.li, netdev, eric.dumazet, hariprasad, greearb


On Jun 24, 2014, at 3:54 PM, David Miller <davem@davemloft.net> wrote:

> From: <rongqing.li@windriver.com>
> Date: Fri, 20 Jun 2014 17:32:36 +0800
> 
>> From: Li RongQing <roy.qing.li@gmail.com> 
>> 
>> cxgb4_netdev() can lead to a deadlock: it takes a spin lock and is called in
>> both thread and softirq context, but it does not disable BH; the lockdep
>> report is below. In fact, cxgb4_netdev() only reads adap_rcu_list under RCU
>> protection, so there is no need to hold the spin lock as well.
> ...
>> Reported-by: Ben Greear <greearb@candelatech.com>
>> Cc: Casey Leedom <leedom@chelsio.com>
>> Cc: Hariprasad Shenai <hariprasad@chelsio.com>
>> Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
>> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> 
> Applied, thanks.
> 
> Casey please _do not_ put indentation in front of your Acked-by:,
> automated tools like patchwork will not pick it up if you do that.
> 
> I'm actually very surprised how much spurious indentation and weird
> spacing shows up in some people's postings.  You folks definitely are
> using graphical tools to type this stuff in, rather than real pure
> text editors.

  Sorry, that was 100% my fault.  I actually typed it in that way because I didn’t know.  My Bad and I’ll do better next time.

Casey
