* [PATCH net v5] net: dsa: mt7530: fix .get_stats64 sleeping in atomic context
@ 2026-04-30 23:13 Daniel Golle
From: Daniel Golle @ 2026-04-30 23:13 UTC (permalink / raw)
To: Chester A. Unal, Daniel Golle, Andrew Lunn, Vladimir Oltean,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Matthias Brugger, AngeloGioacchino Del Regno, Russell King,
Christian Marangi, netdev, linux-kernel, linux-arm-kernel,
linux-mediatek
The .get_stats64 callback runs in atomic context, but on
MDIO-connected switches every register read acquires the MDIO bus
mutex, which can sleep:
[ 12.645973] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:609
[ 12.654442] in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 759, name: grep
[ 12.663377] preempt_count: 0, expected: 0
[ 12.667410] RCU nest depth: 1, expected: 0
[ 12.671511] INFO: lockdep is turned off.
[ 12.675441] CPU: 0 UID: 0 PID: 759 Comm: grep Tainted: G S W 7.0.0+ #0 PREEMPT
[ 12.675453] Tainted: [S]=CPU_OUT_OF_SPEC, [W]=WARN
[ 12.675456] Hardware name: Bananapi BPI-R64 (DT)
[ 12.675459] Call trace:
[ 12.675462] show_stack+0x14/0x1c (C)
[ 12.675477] dump_stack_lvl+0x68/0x8c
[ 12.675487] dump_stack+0x14/0x1c
[ 12.675495] __might_resched+0x14c/0x220
[ 12.675504] __might_sleep+0x44/0x80
[ 12.675511] __mutex_lock+0x50/0xb10
[ 12.675523] mutex_lock_nested+0x20/0x30
[ 12.675532] mt7530_get_stats64+0x40/0x2ac
[ 12.675542] dsa_user_get_stats64+0x2c/0x40
[ 12.675553] dev_get_stats+0x44/0x1e0
[ 12.675564] dev_seq_printf_stats+0x24/0xe0
[ 12.675575] dev_seq_show+0x14/0x3c
[ 12.675583] seq_read_iter+0x37c/0x480
[ 12.675595] seq_read+0xd0/0xec
[ 12.675605] proc_reg_read+0x94/0xe4
[ 12.675615] vfs_read+0x98/0x29c
[ 12.675625] ksys_read+0x54/0xdc
[ 12.675633] __arm64_sys_read+0x18/0x20
[ 12.675642] invoke_syscall.constprop.0+0x54/0xec
[ 12.675653] do_el0_svc+0x3c/0xb4
[ 12.675662] el0_svc+0x38/0x200
[ 12.675670] el0t_64_sync_handler+0x98/0xdc
[ 12.675679] el0t_64_sync+0x158/0x15c
For MDIO-connected switches, poll MIB counters asynchronously using a
delayed workqueue every second and let .get_stats64 return the cached
values under a spinlock. An on-demand mod_delayed_work() call on each
read, rate-limited to once per 100 ms, triggers an immediate refresh so
counters stay responsive when queried more often than the one-second
poll interval.
MMIO-connected switches (MT7988, EN7581, AN7583) are not affected
because their regmap does not sleep, so they continue to read MIB
counters directly in .get_stats64.
Fixes: 88c810f35ed5 ("net: dsa: mt7530: implement .get_stats64")
Signed-off-by: Daniel Golle <daniel@makrotopia.org>
Acked-by: Chester A. Unal <chester.a.unal@arinc9.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
---
v5:
* spin_lock_init() and INIT_DELAYED_WORK() in probe()
* drop teardown as cancel_delayed_work_sync() in remove() is
sufficient and now symmetric with init happening in probe()
v4:
* extract mt7530_stats_refresh() helper from mt7530_stats_poll()
-> mt7530_stats_poll() now just refreshes and re-arms
* call helper synchronously in mt753x_setup() to seed the cache
* avoid zeroed counters during the first poll interval
* avoid INITIAL_JIFFIES vs stats_last==0 wraparound on 32-bit
* swap deprecated system_wq for system_percpu_wq in get_stats64
-> keeps the on-demand refresh on the same queue as schedule_delayed_work()
v3:
* move `stats_last` access under the spinlock to avoid potential race
v2:
* use spin_lock_bh()/spin_unlock_bh() to prevent potential deadlock
* rate-limit mod_delayed_work() refresh to at most once per 100ms
* move cancel_delayed_work_sync() after dsa_unregister_switch()
* add mt753x_teardown() callback to cancel the stats work
* fix commit message
---
drivers/net/dsa/mt7530.c | 68 ++++++++++++++++++++++++++++++++++++++--
drivers/net/dsa/mt7530.h | 8 +++++
2 files changed, 73 insertions(+), 3 deletions(-)
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index b9423389c2ef..e769914d726c 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -25,6 +25,9 @@
#include "mt7530.h"
+#define MT7530_STATS_POLL_INTERVAL (1 * HZ)
+#define MT7530_STATS_RATE_LIMIT (HZ / 10)
+
static struct mt753x_pcs *pcs_to_mt753x_pcs(struct phylink_pcs *pcs)
{
return container_of(pcs, struct mt753x_pcs, pcs);
@@ -906,10 +909,9 @@ static void mt7530_get_rmon_stats(struct dsa_switch *ds, int port,
*ranges = mt7530_rmon_ranges;
}
-static void mt7530_get_stats64(struct dsa_switch *ds, int port,
- struct rtnl_link_stats64 *storage)
+static void mt7530_read_port_stats64(struct mt7530_priv *priv, int port,
+ struct rtnl_link_stats64 *storage)
{
- struct mt7530_priv *priv = ds->priv;
uint64_t data;
/* MIB counter doesn't provide a FramesTransmittedOK but instead
@@ -951,6 +953,54 @@ static void mt7530_get_stats64(struct dsa_switch *ds, int port,
&storage->rx_crc_errors);
}
+static void mt7530_stats_refresh(struct mt7530_priv *priv)
+{
+ struct rtnl_link_stats64 stats = {};
+ struct dsa_port *dp;
+ int port;
+
+ dsa_switch_for_each_user_port(dp, priv->ds) {
+ port = dp->index;
+
+ mt7530_read_port_stats64(priv, port, &stats);
+
+ spin_lock_bh(&priv->stats_lock);
+ priv->ports[port].stats = stats;
+ priv->stats_last = jiffies;
+ spin_unlock_bh(&priv->stats_lock);
+ }
+}
+
+static void mt7530_stats_poll(struct work_struct *work)
+{
+ struct mt7530_priv *priv = container_of(work, struct mt7530_priv,
+ stats_work.work);
+
+ mt7530_stats_refresh(priv);
+ schedule_delayed_work(&priv->stats_work,
+ MT7530_STATS_POLL_INTERVAL);
+}
+
+static void mt7530_get_stats64(struct dsa_switch *ds, int port,
+ struct rtnl_link_stats64 *storage)
+{
+ struct mt7530_priv *priv = ds->priv;
+ bool refresh;
+
+ if (priv->bus) {
+ spin_lock_bh(&priv->stats_lock);
+ *storage = priv->ports[port].stats;
+ refresh = time_after(jiffies, priv->stats_last +
+ MT7530_STATS_RATE_LIMIT);
+ spin_unlock_bh(&priv->stats_lock);
+ if (refresh)
+ mod_delayed_work(system_percpu_wq,
+ &priv->stats_work, 0);
+ } else {
+ mt7530_read_port_stats64(priv, port, storage);
+ }
+}
+
static void mt7530_get_eth_ctrl_stats(struct dsa_switch *ds, int port,
struct ethtool_eth_ctrl_stats *ctrl_stats)
{
@@ -3137,6 +3187,12 @@ mt753x_setup(struct dsa_switch *ds)
if (ret && priv->irq_domain)
mt7530_free_mdio_irq(priv);
+ if (!ret && priv->bus) {
+ mt7530_stats_refresh(priv);
+ schedule_delayed_work(&priv->stats_work,
+ MT7530_STATS_POLL_INTERVAL);
+ }
+
return ret;
}
@@ -3395,6 +3451,9 @@ mt7530_probe_common(struct mt7530_priv *priv)
priv->ds->ops = &mt7530_switch_ops;
priv->ds->phylink_mac_ops = &mt753x_phylink_mac_ops;
mutex_init(&priv->reg_mutex);
+ spin_lock_init(&priv->stats_lock);
+ INIT_DELAYED_WORK(&priv->stats_work, mt7530_stats_poll);
+
dev_set_drvdata(dev, priv);
return 0;
@@ -3409,6 +3468,9 @@ mt7530_remove_common(struct mt7530_priv *priv)
dsa_unregister_switch(priv->ds);
+ if (priv->bus)
+ cancel_delayed_work_sync(&priv->stats_work);
+
mutex_destroy(&priv->reg_mutex);
}
EXPORT_SYMBOL_GPL(mt7530_remove_common);
diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
index 3e0090bed298..dd33b0df3419 100644
--- a/drivers/net/dsa/mt7530.h
+++ b/drivers/net/dsa/mt7530.h
@@ -796,6 +796,7 @@ struct mt7530_fdb {
* @pvid: The VLAN specified is to be considered a PVID at ingress. Any
* untagged frames will be assigned to the related VLAN.
* @sgmii_pcs: Pointer to PCS instance for SerDes ports
+ * @stats: Cached port statistics for MDIO-connected switches
*/
struct mt7530_port {
bool enable;
@@ -803,6 +804,7 @@ struct mt7530_port {
u32 pm;
u16 pvid;
struct phylink_pcs *sgmii_pcs;
+ struct rtnl_link_stats64 stats;
};
/* Port 5 mode definitions of the MT7530 switch */
@@ -875,6 +877,9 @@ struct mt753x_info {
* @create_sgmii: Pointer to function creating SGMII PCS instance(s)
* @active_cpu_ports: Holding the active CPU ports
* @mdiodev: The pointer to the MDIO device structure
+ * @stats_lock: Protects cached per-port stats from concurrent access
+ * @stats_work: Delayed work for polling MIB counters on MDIO switches
+ * @stats_last: Jiffies timestamp of last MIB counter poll
*/
struct mt7530_priv {
struct device *dev;
@@ -900,6 +905,9 @@ struct mt7530_priv {
int (*create_sgmii)(struct mt7530_priv *priv);
u8 active_cpu_ports;
struct mdio_device *mdiodev;
+ spinlock_t stats_lock; /* protects cached stats counters */
+ struct delayed_work stats_work;
+ unsigned long stats_last;
};
struct mt7530_hw_vlan_entry {
--
2.54.0
* Re: [PATCH net v5] net: dsa: mt7530: fix .get_stats64 sleeping in atomic context
@ 2026-05-02 0:15 ` Jakub Kicinski
From: Jakub Kicinski @ 2026-05-02 0:15 UTC (permalink / raw)
To: daniel
Cc: Jakub Kicinski, chester.a.unal, andrew, olteanv, davem, edumazet,
pabeni, matthias.bgg, angelogioacchino.delregno, linux,
ansuelsmth, netdev, linux-kernel, linux-arm-kernel,
linux-mediatek
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
---
Subject: net: dsa: mt7530: fix .get_stats64 sleeping in atomic context
This patch moves MIB counter reads for MDIO-connected switches into a
self-rearming delayed work, and has .get_stats64 return cached values
under a spinlock. A mod_delayed_work() call on each read triggers an
immediate refresh.
> diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
> index b9423389c2ef..e769914d726c 100644
> --- a/drivers/net/dsa/mt7530.c
> +++ b/drivers/net/dsa/mt7530.c
[ ... ]
> @@ -3137,6 +3187,12 @@ mt753x_setup(struct dsa_switch *ds)
> if (ret && priv->irq_domain)
> mt7530_free_mdio_irq(priv);
>
> + if (!ret && priv->bus) {
> + mt7530_stats_refresh(priv);
> + schedule_delayed_work(&priv->stats_work,
> + MT7530_STATS_POLL_INTERVAL);
> + }
> +
> return ret;
> }
Can the delayed work outlive priv on a probe failure path?
mt753x_setup() is installed as ds->ops->setup and is invoked from
dsa_register_switch() via:
dsa_register_switch()
dsa_switch_probe()
dsa_tree_setup()
dsa_tree_setup_switches()
dsa_switch_setup()
ds->ops->setup == mt753x_setup() /* queues stats_work */
dsa_tree_setup_ports() /* may fail here */
dsa_tree_setup_conduit() /* or here */
dsa_tree_setup_lags() /* or here */
If any of the later dsa_tree_setup_* steps fail after mt753x_setup()
has queued stats_work, dsa_tree_teardown_switches() is called, which
invokes ds->ops->teardown per switch. The mt7530 dsa_switch_ops does
not define a .teardown op, so nothing cancels the delayed work.
mt7530_probe() then propagates the error. The driver core does not
invoke .remove() after a failed probe, so mt7530_remove_common()
(which holds the cancel_delayed_work_sync()) never runs. Once
mt7530_probe() returns the error, devres releases priv and priv->ds
while stats_work is still queued. When it fires, mt7530_stats_poll()
dereferences priv, walks priv->ds->dst->ports and issues MDIO reads
via priv->bus on freed memory.
Would it be safer to cancel the work in mt753x_setup()'s own error
path (and/or add a .teardown op) so the work is stopped whenever
setup reports failure?
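One shape such a fix could take, as a non-compiled sketch reusing the
patch's own fields (the op wiring shown is hypothetical):

```c
/* Sketch only, not compile-tested: cancel the stats work whenever DSA
 * tears the switch down, which also covers the setup-failure paths
 * that reach dsa_tree_teardown_switches(). */
static void
mt753x_teardown(struct dsa_switch *ds)
{
	struct mt7530_priv *priv = ds->priv;

	if (priv->bus)
		cancel_delayed_work_sync(&priv->stats_work);
}
```

wired up via `.teardown = mt753x_teardown,` in mt7530_switch_ops,
mirroring how remove() already cancels the work.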
> @@ -3409,6 +3468,9 @@ mt7530_remove_common(struct mt7530_priv *priv)
>
> dsa_unregister_switch(priv->ds);
>
> + if (priv->bus)
> + cancel_delayed_work_sync(&priv->stats_work);
> +
> mutex_destroy(&priv->reg_mutex);
> }
Is the ordering of dsa_unregister_switch() vs. cancel_delayed_work_sync()
correct here?
dsa_unregister_switch() ends up in dsa_switch_remove():
net/dsa/dsa.c:
static void dsa_switch_remove(struct dsa_switch *ds)
{
struct dsa_switch_tree *dst = ds->dst;
dsa_tree_teardown(dst);
dsa_switch_release_ports(ds);
dsa_tree_put(dst);
}
and dsa_switch_release_ports() does list_del + kfree on every
dsa_port entry:
dsa_switch_for_each_port_safe(dp, next, ds) {
...
list_del(&dp->list);
kfree(dp);
}
Meanwhile, mt7530_stats_refresh() iterates the same list without any
locking or RCU:
dsa_switch_for_each_user_port(dp, priv->ds) {
port = dp->index;
mt7530_read_port_stats64(priv, port, &stats);
...
}
and dsa_switch_for_each_user_port expands to a plain
list_for_each_entry over ds->dst->ports.
stats_work rearms itself every MT7530_STATS_POLL_INTERVAL (one
second), and mt7530_get_stats64() can mod_delayed_work(..., 0) it to
fire immediately. With the cancel happening after
dsa_unregister_switch(), the worker can run concurrently with port
teardown and dereference dsa_port entries that have already been
list_del'd and kfree'd, or walk priv->ds->dst after dsa_tree_put()
has released it.
Should cancel_delayed_work_sync(&priv->stats_work) run before
dsa_unregister_switch(), with a guard to prevent mt7530_get_stats64()
from requeuing the work during unregister?
--
pw-bot: cr