From: Giuseppe CAVALLARO <peppe.cavallaro@st.com>
To: David Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org, bhutchings@solarflare.com
Subject: Re: [net-next.git 3/8 (V2)] stmmac: add the initial tx coalesce schema
Date: Fri, 14 Sep 2012 09:36:30 +0200 [thread overview]
Message-ID: <5052DE7E.8070704@st.com> (raw)
In-Reply-To: <20120913.162333.1518469374321928795.davem@davemloft.net>
On 9/13/2012 10:23 PM, David Miller wrote:
> From: Giuseppe CAVALLARO <peppe.cavallaro@st.com>
> Date: Tue, 11 Sep 2012 08:55:09 +0200
>
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&priv->tx_lock, flags);
>>
>> - spin_lock(&priv->tx_lock);
>> + priv->xstats.tx_clean++;
>
> You are changing the locking here for the sake of the new timer.
>
> But timers run in software interrupt context, so this change is
> completely unnecessary since NAPI runs in software interrupt context
> as well, and neither timers nor NAPI run in hardware interrupts
> context.
Indeed, it can also be called from the ISR in this new implementation.
I added the spin_lock_irqsave/restore because otherwise, testing with
CONFIG_PROVE_LOCKING, I get the following warning on ARM SMP:
[ 8.030000]
[ 8.030000] =================================
[ 8.030000] [ INFO: inconsistent lock state ]
[ 8.030000] 3.4.7_stm24_0302-b2000+ #103 Not tainted
[ 8.030000] ---------------------------------
[ 8.030000] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
[ 8.030000] swapper/0/1 [HC1[1]:SC0[0]:HE0:SE1] takes:
[ 8.030000] (&(&priv->tx_lock)->rlock){?.-...}, at: [<802651d8>] stmmac_tx+0x1c/0x388
[ 8.030000] {HARDIRQ-ON-W} state was registered at:
[ 8.030000] [<800562b4>] __lock_acquire+0x638/0x179c
[ 8.030000] [<80057884>] lock_acquire+0x60/0x74
[ 8.030000] [<80428a08>] _raw_spin_lock+0x40/0x50
[ 8.030000] [<802651d8>] stmmac_tx+0x1c/0x388
[ 8.030000] [<80026be0>] run_timer_softirq+0x180/0x23c
[ 8.030000] [<80020ccc>] __do_softirq+0xa0/0x114
[ 8.030000] [<80021204>] irq_exit+0x58/0x7c
[ 8.030000] [<8000dc80>] handle_IRQ+0x7c/0xb8
[ 8.030000] [<80008464>] gic_handle_irq+0x34/0x58
[ 8.030000] [<80429684>] __irq_svc+0x44/0x78
[ 8.030000] [<8001c3f4>] vprintk+0x41c/0x480
[ 8.030000] [<8042097c>] printk+0x18/0x24
[ 8.030000] [<805aef6c>] prepare_namespace+0x1c/0x1a4
[ 8.030000] [<805ae980>] kernel_init+0x1c8/0x20c
[ 8.030000] [<8000deb8>] kernel_thread_exit+0x0/0x8
[ 8.030000] irq event stamp: 254745
[ 8.030000] hardirqs last enabled at (254744): [<80429240>] _raw_spin_unlock_irqrestore+0x3c/0x6c
[ 8.030000] hardirqs last disabled at (254745): [<80429674>] __irq_svc+0x34/0x78
[ 8.030000] softirqs last enabled at (254741): [<8035d964>] dev_queue_xmit+0x6a4/0x724
[ 8.030000] softirqs last disabled at (254737): [<8035d2d4>] dev_queue_xmit+0x14/0x724
[ 8.030000]
[ 8.030000] other info that might help us debug this:
[ 8.030000] Possible unsafe locking scenario:
[ 8.030000]
[ 8.030000] CPU0
[ 8.030000] ----
[ 8.030000] lock(&(&priv->tx_lock)->rlock);
[ 8.030000] <Interrupt>
[ 8.030000] lock(&(&priv->tx_lock)->rlock);
[ 8.030000]
[ 8.030000] *** DEADLOCK ***
> Therefore, disabling hardware interrupts for this lock is unnecessary
> and will decrease performance.
2012-09-11 6:55 [net-next.git 0/8 (V4)] stmmac: remove dead code for STMMAC_TIMER and add new mitigation schema Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 1/8] stmmac: remove dead code for TIMER Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 2/8 (V2)] stmmac: manage tx clean out of rx_poll Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 3/8 (V2)] stmmac: add the initial tx coalesce schema Giuseppe CAVALLARO
2012-09-13 20:23 ` David Miller
2012-09-13 20:42 ` Ben Hutchings
2012-09-13 20:46 ` David Miller
2012-09-13 21:10 ` Ben Hutchings
2012-09-13 21:37 ` David Miller
2012-09-13 23:11 ` Ben Hutchings
2012-09-14 7:36 ` Giuseppe CAVALLARO [this message]
2012-09-18 5:41 ` Giuseppe CAVALLARO
2012-10-04 10:06 ` Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 4/8 (V3)] stmmac: add Rx watchdog support to mitigate the DMA irqs Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 5/8 (V2)] stmmac: get/set coalesce parameters via ethtool Giuseppe CAVALLARO
2012-09-11 18:14 ` Ben Hutchings
2012-09-11 6:55 ` [net-next.git 6/8] stmmac: fix and review the rx irq path after adding new mitigation Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 7/8] stmmac: update the doc with new IRQ mitigation Giuseppe CAVALLARO
2012-09-11 6:55 ` [net-next.git 8/8] stmmac: update the driver version to Sept_2012 Giuseppe CAVALLARO
-- strict thread matches above, loose matches on Subject: below --
2012-09-10 7:38 [net-next.git 0/8 (V3)] stmmac: remove dead code for STMMAC_TIMER and add new mitigation schema Giuseppe CAVALLARO
2012-09-10 7:38 ` [net-next.git 3/8 (V2)] stmmac: add the initial tx coalesce schema Giuseppe CAVALLARO