From: Joe Damato <jdamato@fastly.com>
To: Guenter Roeck <linux@roeck-us.net>
Cc: netdev@vger.kernel.org, mkarsten@uwaterloo.ca,
skhawaja@google.com, sdf@fomichev.me, bjorn@rivosinc.com,
amritha.nambiar@intel.com, sridhar.samudrala@intel.com,
willemdebruijn.kernel@gmail.com, edumazet@google.com,
Jakub Kicinski <kuba@kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Paolo Abeni <pabeni@redhat.com>, Jonathan Corbet <corbet@lwn.net>,
Jiri Pirko <jiri@resnulli.us>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Lorenzo Bianconi <lorenzo@kernel.org>,
Johannes Berg <johannes.berg@intel.com>,
"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
open list <linux-kernel@vger.kernel.org>,
pcnet32@frontier.com
Subject: Re: [net-next v6 5/9] net: napi: Add napi_config
Date: Wed, 27 Nov 2024 17:17:15 -0800
Message-ID: <Z0fEm2EmZ6q1c9Mu@LQ3V64L9R2>
In-Reply-To: <f04406c5-f805-4de3-8a7c-abfdfd91a501@roeck-us.net>

On Wed, Nov 27, 2024 at 01:43:14PM -0800, Guenter Roeck wrote:
> On 11/27/24 10:51, Joe Damato wrote:
> > On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
> > > Hi,
> > >
> > > On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
[...]
> > It seems this was triggered because before the identified commit,
> > napi_enable did not call napi_hash_add (and thus did not take the
> > napi_hash_lock).
> >
> > So, I agree there is an inversion; I can't say for sure if this
> > would cause a deadlock in certain situations. It seems like
> > napi_hash_del in the close path will return, so the inversion
> > doesn't seem like it'd lead to a deadlock, but I am not an expert in
> > this and could certainly be wrong.
> >
> > I wonder if a potential fix for this would be in the driver's close
> > function?
> >
> > In pcnet32_open the order is:
> > lock(lp->lock)
> > napi_enable
> > netif_start_queue
> > mod_timer(watchdog)
> > unlock(lp->lock)
> >
> > Perhaps pcnet32_close should be the same?
> >
> > I've included an example patch below for pcnet32_close and I've CC'd
> > the maintainer of pcnet32 that is not currently CC'd.
> >
> > Guenter: Is there any chance you might be able to test the
> > proposed patch below?
> >
>
> I moved the spinlock after del_timer_sync() because it is not a good idea
> to hold a spinlock when calling that function. That results in:
>
> [ 10.646956] BUG: sleeping function called from invalid context at net/core/dev.c:6775
> [ 10.647142] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 1817, name: ip
> [ 10.647237] preempt_count: 1, expected: 0
> [ 10.647319] 2 locks held by ip/1817:
> [ 10.647383] #0: ffffffff81ded990 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x22a/0x74c
> [ 10.647880] #1: ff6000000498ccb0 (&lp->lock){-.-.}-{3:3}, at: pcnet32_close+0x40/0x126
[...]
> This is due to might_sleep() at the beginning of napi_disable(). So it doesn't
> work as intended, it just replaces one problem with another.

Thanks for testing that. And you are right, it is also not correct.

I will give it some thought to see if I can think of something
better. Maybe Don will have some ideas.
> > Don: Would you mind taking a look to see if this change is sensible?
> >
> > Netdev maintainers: at a higher level, I'm not sure how many other
> > drivers might have locking patterns like this that commit
> > 86e25f40aa1e ("net: napi: Add napi_config") will break in a similar
> > manner.
> >
> > Do I:
> > - comb through drivers trying to identify these, and/or
>
> Coccinelle, checking for napi_enable calls under spinlock, points to:
>
> napi_enable called under spin_lock_irqsave from drivers/net/ethernet/via/via-velocity.c:2325
> napi_enable called under spin_lock_irqsave from drivers/net/can/grcan.c:1076
> napi_enable called under spin_lock from drivers/net/ethernet/marvell/mvneta.c:4388
> napi_enable called under spin_lock_irqsave from drivers/net/ethernet/amd/pcnet32.c:2104

I checked the three cases above other than pcnet32 and they appear
to be false positives to me.

Guenter: would you mind sending me your cocci script? Mostly for
selfish reasons; I'd like to see how you did it so I can learn more
:) Feel free to do so off list if you prefer.

I tried writing my first Coccinelle script (included below). It is
probably wrong, but it attempts to detect:

  - interrupt routines that hold locks
  - in drivers that call napi_enable between a lock/unlock

I couldn't figure out how to get regexps to work in my script, so I
made one copy of the script for each of the spin_lock_* variants and
ran them all.

Only one offender was detected: pcnet32.

So, I guess the question to put out there to maintainers / others on
the list is:

  - There seems to be at least 1 driver affected (pcnet32). There
    might be others, but my simple (and likely incorrect) cocci
    script below couldn't detect any with this particular bug shape.
    Worth mentioning: there could be other bug shapes that trigger
    an inversion that I am currently unaware of.

  - As far as I can tell, there are three ways to proceed:
      1. Find and fix all drivers which broke (pcnet32 being the only
         known driver at this point), or
      2. Disable IRQs when taking the lock in napi_hash_del, or
      3. Move the napi hash add/remove out of napi enable/disable.

Happy to proceed however seems most reasonable to the maintainers,
please let me know.
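
For the record, a rough sketch of what option 2 might look like
follows. This is hypothetical and untested: the context lines are
approximated from memory, and napi_hash_add would presumably need
the same treatment since it takes the same lock from napi_enable:

    --- a/net/core/dev.c
    +++ b/net/core/dev.c
    @@ static void napi_hash_del(struct napi_struct *napi)
     {
    -	spin_lock(&napi_hash_lock);
    +	unsigned long flags;
    +
    +	spin_lock_irqsave(&napi_hash_lock, flags);
     
     	hlist_del_init_rcu(&napi->napi_hash_node);
     
    -	spin_unlock(&napi_hash_lock);
    +	spin_unlock_irqrestore(&napi_hash_lock, flags);
     }

That would make it safe for drivers to reach the napi hash update
with IRQs disabled, at the cost of slightly longer IRQ-off sections
around the hash manipulation.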

My cocci script follows; as noted above I am too much of a noob and
couldn't figure out how to use regexps to match the different
spin_lock_* variants, so I simply made multiple versions of this
script, one for each variant:

virtual report

@napi@
identifier func0;
position p0;
@@

func0(...)
{
...
spin_lock_irqsave(...)
...
napi_enable@p0(...)
...
spin_unlock_irqrestore(...)
...
}

@u@
position p;
identifier func;
typedef irqreturn_t;
@@

irqreturn_t func (...)
{
...
spin_lock@p(...)
...
}

@script:python depends on napi && u@
p << u.p;
func << u.func;
p_enable << napi.p0;
@@

print("* file: %s irq handler %s takes lock on line %s; napi_enable is called under a lock on line %s" % (p[0].file, func, p[0].line, p_enable[0].line))
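
As an aside: if SmPL's regular expression constraints on identifiers
work the way I think they do, the per-variant copies could probably
be collapsed into a single rule, something like this (untested):

@napi@
identifier func0;
identifier lock_fn =~ "^spin_lock";
identifier unlock_fn =~ "^spin_unlock";
position p0;
@@

func0(...)
{
...
lock_fn(...)
...
napi_enable@p0(...)
...
unlock_fn(...)
...
}

Running it with something like "spatch --sp-file script.cocci --dir
drivers/net -D report" should be enough to exercise it, assuming I
have the invocation right.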