Netdev List
From: Joe Damato <jdamato@fastly.com>
To: Guenter Roeck <linux@roeck-us.net>
Cc: netdev@vger.kernel.org, mkarsten@uwaterloo.ca,
	skhawaja@google.com, sdf@fomichev.me, bjorn@rivosinc.com,
	amritha.nambiar@intel.com, sridhar.samudrala@intel.com,
	willemdebruijn.kernel@gmail.com, edumazet@google.com,
	Jakub Kicinski <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Paolo Abeni <pabeni@redhat.com>, Jonathan Corbet <corbet@lwn.net>,
	Jiri Pirko <jiri@resnulli.us>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Johannes Berg <johannes.berg@intel.com>,
	"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
	open list <linux-kernel@vger.kernel.org>,
	pcnet32@frontier.com
Subject: Re: [net-next v6 5/9] net: napi: Add napi_config
Date: Wed, 27 Nov 2024 10:51:16 -0800	[thread overview]
Message-ID: <Z0dqJNnlcIrvLuV6@LQ3V64L9R2> (raw)
In-Reply-To: <85dd4590-ea6b-427d-876a-1d8559c7ad82@roeck-us.net>

On Wed, Nov 27, 2024 at 09:43:54AM -0800, Guenter Roeck wrote:
> Hi,
> 
> On Fri, Oct 11, 2024 at 06:45:00PM +0000, Joe Damato wrote:
> > Add a persistent NAPI config area for NAPI configuration to the core.
> > Drivers opt-in to setting the persistent config for a NAPI by passing an
> > index when calling netif_napi_add_config.
> > 
> > napi_config is allocated in alloc_netdev_mqs, freed in free_netdev
> > (after the NAPIs are deleted).
> > 
> > Drivers which call netif_napi_add_config will have persistent per-NAPI
> > settings: NAPI IDs, gro_flush_timeout, and defer_hard_irq settings.
> > 
> > Per-NAPI settings are saved in napi_disable and restored in napi_enable.
> > 
> > Co-developed-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > Signed-off-by: Martin Karsten <mkarsten@uwaterloo.ca>
> > Signed-off-by: Joe Damato <jdamato@fastly.com>
> > Reviewed-by: Jakub Kicinski <kuba@kernel.org>
> 
> This patch triggers a lock inversion message on pcnet Ethernet adapters.

Thanks for the report. I am not familiar with the pcnet driver, but
took some time now to read the report below and the driver code.

I could definitely be reading the output incorrectly (if so please
let me know), but it seems like the issue can be triggered in this
case:

CPU 0:
pcnet32_open
   lock(lp->lock)
     napi_enable
       napi_hash_add
         lock(napi_hash_lock)
         unlock(napi_hash_lock)
   unlock(lp->lock)


Meanwhile on CPU 1:
  pcnet32_close
    napi_disable
      napi_hash_del
        lock(napi_hash_lock)
        unlock(napi_hash_lock)
    lock(lp->lock)
    [... other code ...]
    unlock(lp->lock)
    [... other code ...]
    lock(lp->lock)
    [... other code ...]
    unlock(lp->lock)

In other words: while the close path is holding napi_hash_lock (and
before it acquires lp->lock), the enable path takes lp->lock and
then napi_hash_lock.

It seems this was triggered because before the identified commit,
napi_enable did not call napi_hash_add (and thus did not take the
napi_hash_lock).

So, I agree there is a lock ordering inversion; I can't say for
sure whether it would cause an actual deadlock in certain
situations. Since napi_hash_del in the close path releases
napi_hash_lock before lp->lock is acquired, the two locks are never
held simultaneously there, so the inversion doesn't seem like it'd
lead to a deadlock, but I am not an expert in this and could
certainly be wrong.

I wonder if a potential fix for this would be in the driver's close
function? 

In pcnet32_open the order is:
  lock(lp->lock)
    napi_enable
    netif_start_queue
    mod_timer(watchdog)
  unlock(lp->lock)

Perhaps pcnet32_close should be the same?

I've included an example patch below for pcnet32_close, and I've
CC'd the pcnet32 maintainer, who was not previously CC'd.

Guenter: Is there any chance you might be able to test the proposed
patch below?

Don: Would you mind taking a look to see if this change is sensible?

Netdev maintainers: at a higher level, I'm not sure how many other
drivers might have locking patterns like this that commit
86e25f40aa1e ("net: napi: Add napi_config") will break in a similar
manner. 

Do I:
  - comb through drivers trying to identify these, and/or
  - do we find a way to implement the identified commit with the
    original lock ordering to avoid breaking any other driver?

I'd appreciate guidance/insight from the maintainers on how to best
proceed.

diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 72db9f9e7bee..ff56a308fec9 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -2623,13 +2623,13 @@ static int pcnet32_close(struct net_device *dev)
        struct pcnet32_private *lp = netdev_priv(dev);
        unsigned long flags;

+       spin_lock_irqsave(&lp->lock, flags);
+
        del_timer_sync(&lp->watchdog_timer);

        netif_stop_queue(dev);
        napi_disable(&lp->napi);

-       spin_lock_irqsave(&lp->lock, flags);
-
        dev->stats.rx_missed_errors = lp->a->read_csr(ioaddr, 112);

        netif_printk(lp, ifdown, KERN_DEBUG, dev,


Thread overview: 25+ messages
2024-10-11 18:44 [net-next v6 0/9] Add support for per-NAPI config via netlink Joe Damato
2024-10-11 18:44 ` [net-next v6 1/9] net: napi: Make napi_defer_hard_irqs per-NAPI Joe Damato
2024-10-11 18:44 ` [net-next v6 2/9] netdev-genl: Dump napi_defer_hard_irqs Joe Damato
2024-10-11 18:44 ` [net-next v6 3/9] net: napi: Make gro_flush_timeout per-NAPI Joe Damato
2024-10-11 18:44 ` [net-next v6 4/9] netdev-genl: Dump gro_flush_timeout Joe Damato
2024-10-11 18:47   ` Eric Dumazet
2024-10-11 18:45 ` [net-next v6 5/9] net: napi: Add napi_config Joe Damato
2024-10-11 18:49   ` Eric Dumazet
2024-11-27 17:43   ` Guenter Roeck
2024-11-27 18:51     ` Joe Damato [this message]
2024-11-27 20:00       ` Joe Damato
2024-11-27 22:48         ` Guenter Roeck
2024-11-30 20:45         ` Jakub Kicinski
2024-12-02 17:34           ` Joe Damato
2024-11-27 21:43       ` Guenter Roeck
2024-11-28  1:17         ` Joe Damato
2024-11-28  1:34           ` Guenter Roeck
2024-10-11 18:45 ` [net-next v6 6/9] netdev-genl: Support setting per-NAPI config values Joe Damato
2024-10-11 18:49   ` Eric Dumazet
2024-11-12  9:17   ` Paolo Abeni
2024-11-12 17:12     ` Joe Damato
2024-10-11 18:45 ` [net-next v6 7/9] bnxt: Add support for persistent NAPI config Joe Damato
2024-10-11 18:45 ` [net-next v6 8/9] mlx5: " Joe Damato
2024-10-11 18:45 ` [net-next v6 9/9] mlx4: Add support for persistent NAPI config to RX CQs Joe Damato
2024-10-15  1:10 ` [net-next v6 0/9] Add support for per-NAPI config via netlink patchwork-bot+netdevbpf
