From: Jakub Kicinski <kuba@kernel.org>
To: daniel@makrotopia.org
Cc: Jakub Kicinski <kuba@kernel.org>,
chester.a.unal@arinc9.com, andrew@lunn.ch, olteanv@gmail.com,
davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
matthias.bgg@gmail.com, angelogioacchino.delregno@collabora.com,
linux@armlinux.org.uk, ansuelsmth@gmail.com,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-mediatek@lists.infradead.org, frank-w@public-files.de,
john@phrozen.org
Subject: Re: [PATCH net] net: dsa: mt7530: fix .get_stats64 sleeping in atomic context
Date: Sat, 18 Apr 2026 11:01:00 -0700 [thread overview]
Message-ID: <20260418180100.131392-1-kuba@kernel.org> (raw)
In-Reply-To: <79dc0ec5b6be698b14cb66339d6f63033ca2934a.1776397542.git.daniel@makrotopia.org>
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
Subject: net: dsa: mt7530: fix .get_stats64 sleeping in atomic context
This commit addresses a sleep-in-atomic bug when reading MDIO-connected
switch port statistics by delegating the register reads to a delayed
work item on the system workqueue. The .get_stats64 callback now returns
cached values protected by a spinlock instead of synchronously reading
from the MDIO bus. MMIO-connected switches are unaffected and continue
to read counters directly.
> diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
> index b9423389c2ef0..786d3a8492bcb 100644
> --- a/drivers/net/dsa/mt7530.c
> +++ b/drivers/net/dsa/mt7530.c
[ ... ]
> @@ -951,6 +952,43 @@ static void mt7530_get_stats64(struct dsa_switch *ds, int port,
> &storage->rx_crc_errors);
> }
>
> +static void mt7530_stats_poll(struct work_struct *work)
> +{
> + struct mt7530_priv *priv = container_of(work, struct mt7530_priv,
> + stats_work.work);
> + struct rtnl_link_stats64 stats = {};
> + struct dsa_port *dp;
> + int port;
> +
> + dsa_switch_for_each_user_port(dp, priv->ds) {
> + port = dp->index;
> +
> + mt7530_read_port_stats64(priv, port, &stats);
> +
> + spin_lock(&priv->stats_lock);
Are we at risk of a deadlock here?
The .get_stats64 callback can be reached via dev_get_stats(), which may
run in softirq context.
If this delayed work takes the lock in process context and is then
interrupted on the same CPU by a softirq that tries to acquire the same
lock through dev_get_stats(), that CPU deadlocks. Both paths should
likely use spin_lock_bh() and spin_unlock_bh().
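Something along these lines (untested sketch against the code in this
hunk; only the lock flavor changes):

```c
	/* writer side: runs in process context from the delayed work,
	 * so BHs must be disabled while the lock is held
	 */
	spin_lock_bh(&priv->stats_lock);
	priv->ports[port].stats = stats;
	spin_unlock_bh(&priv->stats_lock);

	/* reader side in mt7530_get_stats64(), which may already be
	 * entered with BHs disabled via dev_get_stats()
	 */
	spin_lock_bh(&priv->stats_lock);
	*storage = priv->ports[port].stats;
	spin_unlock_bh(&priv->stats_lock);
```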
> + priv->ports[port].stats = stats;
> + spin_unlock(&priv->stats_lock);
> + }
> +
> + schedule_delayed_work(&priv->stats_work,
> + MT7530_STATS_POLL_INTERVAL);
> +}
> +
> +static void mt7530_get_stats64(struct dsa_switch *ds, int port,
> + struct rtnl_link_stats64 *storage)
> +{
> + struct mt7530_priv *priv = ds->priv;
> +
> + if (priv->bus) {
> + spin_lock(&priv->stats_lock);
> + *storage = priv->ports[port].stats;
> + spin_unlock(&priv->stats_lock);
> + mod_delayed_work(system_wq, &priv->stats_work, 0);
Could this unconditional background refresh cause a denial of service on
the MDIO bus?
If an unprivileged user continually reads network statistics in a tight
loop, mt7530_stats_poll() will execute continuously, potentially starving
other MDIO tasks like PHY link-state monitoring.
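One option (hypothetical, untested; the last_stats_poll field is made up
for illustration) would be to rely on the periodic poll alone, or to
throttle the kick so userspace cannot force back-to-back MDIO scans:

```c
	/* only kick the poll if the cached counters are older than the
	 * normal poll interval
	 */
	if (time_after(jiffies,
		       priv->last_stats_poll + MT7530_STATS_POLL_INTERVAL))
		mod_delayed_work(system_wq, &priv->stats_work, 0);
```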
Additionally, is it possible for this call to re-arm the workqueue during
teardown?
In mt7530_remove_common(), cancel_delayed_work_sync() is called before
dsa_unregister_switch(). If get_stats64() is called during or after
dsa_unregister_switch(), it would re-arm the workqueue after it was
explicitly cancelled, potentially causing a use-after-free when priv is
freed.
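A guard checked under the same lock (stats_poll_enabled is a
hypothetical field name, sketch untested) would prevent re-arming after
teardown has started:

```c
	spin_lock_bh(&priv->stats_lock);
	*storage = priv->ports[port].stats;
	/* cleared (under the lock) before cancel_delayed_work_sync() */
	if (priv->stats_poll_enabled)
		mod_delayed_work(system_wq, &priv->stats_work, 0);
	spin_unlock_bh(&priv->stats_lock);
```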
> + } else {
> + mt7530_read_port_stats64(priv, port, storage);
> + }
> +}
[ ... ]
> @@ -3137,6 +3175,13 @@ mt753x_setup(struct dsa_switch *ds)
> if (ret && priv->irq_domain)
> mt7530_free_mdio_irq(priv);
>
> + if (!ret && priv->bus) {
> + spin_lock_init(&priv->stats_lock);
> + INIT_DELAYED_WORK(&priv->stats_work, mt7530_stats_poll);
> + schedule_delayed_work(&priv->stats_work,
> + MT7530_STATS_POLL_INTERVAL);
If dsa_register_switch() encounters an error after this setup callback
completes successfully, the DSA core will attempt to clean up.
Since mt7530_switch_ops does not implement a teardown callback to cancel
this delayed work, will the work remain scheduled and eventually
dereference freed priv memory once the driver core frees it via devres?
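E.g. a .teardown implementation mirroring the setup above (sketch, not
tested) wired into mt7530_switch_ops would let the DSA core cancel the
work on any unwind path:

```c
static void
mt753x_teardown(struct dsa_switch *ds)
{
	struct mt7530_priv *priv = ds->priv;

	/* only armed for MDIO-connected switches in mt753x_setup() */
	if (priv->bus)
		cancel_delayed_work_sync(&priv->stats_work);
}
```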
> + }
> +
> return ret;
> }
--
pw-bot: cr
2026-04-17 3:55 [PATCH net] net: dsa: mt7530: fix .get_stats64 sleeping in atomic context Daniel Golle
2026-04-17 7:35 ` Chester A. Unal
2026-04-17 12:08 ` Daniel Golle
2026-04-17 12:06 ` Andrew Lunn
2026-04-17 17:46 ` Breno Leitao
2026-04-17 18:03 ` Daniel Golle
2026-04-18 18:01 ` Jakub Kicinski [this message]