* [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
@ 2025-10-09 1:32 Jianpeng Chang
2025-10-10 9:31 ` Wei Fang
0 siblings, 1 reply; 6+ messages in thread
From: Jianpeng Chang @ 2025-10-09 1:32 UTC (permalink / raw)
To: claudiu.manoil, vladimir.oltean, wei.fang, xiaoning.wang,
andrew+netdev, davem, edumazet, kuba, pabeni, alexandru.marginean
Cc: imx, netdev, linux-kernel, Jianpeng Chang
After applying the workaround for err050089, the LS1028A platform
experiences RCU stalls on the RT kernel. This issue is caused by
recursive acquisition of the enetc_mdio_lock read lock. Listed below are
some of the call stacks identified under the enetc_poll path that may
lead to a deadlock:
enetc_poll
-> enetc_lock_mdio
-> enetc_clean_rx_ring OR napi_complete_done
-> napi_gro_receive
-> enetc_start_xmit
-> enetc_lock_mdio
-> enetc_map_tx_buffs
-> enetc_unlock_mdio
-> enetc_unlock_mdio
After enetc_poll acquires the read lock, a higher-priority writer attempts
to acquire the lock, causing preemption. The writer detects that a
read lock is already held and is scheduled out. However, readers under
enetc_poll cannot acquire the read lock again because a writer is already
waiting, leading to a thread hang.
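In rwlock terms, the hazard reduces to the following sketch (illustrative
pseudocode only, not driver code; enetc_lock_mdio() boils down to
read_lock() on the shared enetc_mdio_lock):

	read_lock(&enetc_mdio_lock);	/* enetc_poll: outer read lock */
	/* a higher-priority writer now blocks in write_lock(); to prevent
	 * writer starvation, no new readers are admitted
	 */
	read_lock(&enetc_mdio_lock);	/* enetc_start_xmit: queued behind
					 * the waiting writer -> deadlock
					 */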
Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
to prevent recursive lock acquisition.
Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
---
v3:
- remove the curly braces
v2: https://lore.kernel.org/netdev/20250925021152.1674197-1-jianpeng.chang.cn@windriver.com/
- change the Fixes: tag and the subject.
- add a blank line before the return.
v1: https://lore.kernel.org/netdev/20250924054704.2795474-1-jianpeng.chang.cn@windriver.com/
drivers/net/ethernet/freescale/enetc/enetc.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index aae462a0cf5a..27f53f1bbdf7 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1595,6 +1595,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
/* next descriptor to process */
i = rx_ring->next_to_clean;
+ enetc_lock_mdio();
+
while (likely(rx_frm_cnt < work_limit)) {
union enetc_rx_bd *rxbd;
struct sk_buff *skb;
@@ -1630,7 +1632,9 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_byte_cnt += skb->len + ETH_HLEN;
rx_frm_cnt++;
+ enetc_unlock_mdio();
napi_gro_receive(napi, skb);
+ enetc_lock_mdio();
}
rx_ring->next_to_clean = i;
@@ -1638,6 +1642,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
rx_ring->stats.packets += rx_frm_cnt;
rx_ring->stats.bytes += rx_byte_cnt;
+ enetc_unlock_mdio();
+
return rx_frm_cnt;
}
@@ -1947,6 +1953,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
/* next descriptor to process */
i = rx_ring->next_to_clean;
+ enetc_lock_mdio();
+
while (likely(rx_frm_cnt < work_limit)) {
union enetc_rx_bd *rxbd, *orig_rxbd;
struct xdp_buff xdp_buff;
@@ -2010,7 +2018,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
*/
enetc_bulk_flip_buff(rx_ring, orig_i, i);
+ enetc_unlock_mdio();
napi_gro_receive(napi, skb);
+ enetc_lock_mdio();
break;
case XDP_TX:
tx_ring = priv->xdp_tx_ring[rx_ring->index];
@@ -2075,6 +2085,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring) -
rx_ring->xdp.xdp_tx_in_flight);
+ enetc_unlock_mdio();
+
return rx_frm_cnt;
}
@@ -2093,6 +2105,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
for (i = 0; i < v->count_tx_rings; i++)
if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
complete = false;
+ enetc_unlock_mdio();
prog = rx_ring->xdp.prog;
if (prog)
@@ -2104,10 +2117,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
if (work_done)
v->rx_napi_work = true;
- if (!complete) {
- enetc_unlock_mdio();
+ if (!complete)
return budget;
- }
napi_complete_done(napi, work_done);
@@ -2116,6 +2127,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
v->rx_napi_work = false;
+ enetc_lock_mdio();
/* enable interrupts */
enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);
--
2.51.0
^ permalink raw reply related [flat|nested] 6+ messages in thread

* RE: [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
2025-10-09 1:32 [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock Jianpeng Chang
@ 2025-10-10 9:31 ` Wei Fang
2025-10-10 10:51 ` Vladimir Oltean
0 siblings, 1 reply; 6+ messages in thread
From: Wei Fang @ 2025-10-10 9:31 UTC (permalink / raw)
To: Jianpeng Chang, Vladimir Oltean
Cc: imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Claudiu Manoil, Clark Wang,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, Alexandru Marginean
> After applying the workaround for err050089, the LS1028A platform
> experiences RCU stalls on the RT kernel. This issue is caused by
> recursive acquisition of the enetc_mdio_lock read lock. Listed below are
> some of the call stacks identified under the enetc_poll path that may
> lead to a deadlock:
>
> enetc_poll
> -> enetc_lock_mdio
> -> enetc_clean_rx_ring OR napi_complete_done
> -> napi_gro_receive
> -> enetc_start_xmit
> -> enetc_lock_mdio
> -> enetc_map_tx_buffs
> -> enetc_unlock_mdio
> -> enetc_unlock_mdio
>
> After enetc_poll acquires the read lock, a higher-priority writer attempts
> to acquire the lock, causing preemption. The writer detects that a
> read lock is already held and is scheduled out. However, readers under
> enetc_poll cannot acquire the read lock again because a writer is already
> waiting, leading to a thread hang.
>
> Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
> to prevent recursive lock acquisition.
>
> Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
Acked-by: Wei Fang <wei.fang@nxp.com>
Hi Vladimir,
Do you have any comments? This patch will cause a performance regression,
but the RCU stalls are more severe.
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
2025-10-10 9:31 ` Wei Fang
@ 2025-10-10 10:51 ` Vladimir Oltean
2025-10-10 11:08 ` Vladimir Oltean
0 siblings, 1 reply; 6+ messages in thread
From: Vladimir Oltean @ 2025-10-10 10:51 UTC (permalink / raw)
To: Wei Fang
Cc: Jianpeng Chang, imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Claudiu Manoil, Clark Wang,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, Alexandru Marginean
On Fri, Oct 10, 2025 at 12:31:37PM +0300, Wei Fang wrote:
> > After applying the workaround for err050089, the LS1028A platform
> > experiences RCU stalls on the RT kernel. This issue is caused by
> > recursive acquisition of the enetc_mdio_lock read lock. Listed below are
> > some of the call stacks identified under the enetc_poll path that may
> > lead to a deadlock:
> >
> > enetc_poll
> > -> enetc_lock_mdio
> > -> enetc_clean_rx_ring OR napi_complete_done
> > -> napi_gro_receive
> > -> enetc_start_xmit
> > -> enetc_lock_mdio
> > -> enetc_map_tx_buffs
> > -> enetc_unlock_mdio
> > -> enetc_unlock_mdio
> >
> > After enetc_poll acquires the read lock, a higher-priority writer attempts
> > to acquire the lock, causing preemption. The writer detects that a
> > read lock is already held and is scheduled out. However, readers under
> > enetc_poll cannot acquire the read lock again because a writer is already
> > waiting, leading to a thread hang.
> >
> > Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
> > to prevent recursive lock acquisition.
> >
> > Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
> > Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
>
> Acked-by: Wei Fang <wei.fang@nxp.com>
>
> Hi Vladimir,
>
> Do you have any comments? This patch will cause a performance regression,
> but the RCU stalls are more severe.
>
I'm fine with the change in principle. It's my fault because I didn't
understand how rwlock writer starvation prevention is implemented, so I
thought there would be no problem with reentrant readers.
But I wonder if xdp_do_flush() shouldn't also be outside the enetc_lock_mdio()
section. Flushing XDP buffs with XDP_REDIRECT action might lead to
enetc_xdp_xmit() being called, which also takes the lock...
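Something along these lines is what I have in mind (untested sketch, just
to illustrate the shape):

	enetc_unlock_mdio();
	xdp_do_flush();		/* may end up calling enetc_xdp_xmit() */
	enetc_lock_mdio();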
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
2025-10-10 10:51 ` Vladimir Oltean
@ 2025-10-10 11:08 ` Vladimir Oltean
2025-10-14 3:06 ` Jianpeng Chang
0 siblings, 1 reply; 6+ messages in thread
From: Vladimir Oltean @ 2025-10-10 11:08 UTC (permalink / raw)
To: Wei Fang
Cc: Jianpeng Chang, imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Claudiu Manoil, Clark Wang,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, Alexandru Marginean
On Fri, Oct 10, 2025 at 01:51:38PM +0300, Vladimir Oltean wrote:
> On Fri, Oct 10, 2025 at 12:31:37PM +0300, Wei Fang wrote:
> > > After applying the workaround for err050089, the LS1028A platform
> > > experiences RCU stalls on the RT kernel. This issue is caused by
> > > recursive acquisition of the enetc_mdio_lock read lock. Listed below are
> > > some of the call stacks identified under the enetc_poll path that may
> > > lead to a deadlock:
> > >
> > > enetc_poll
> > > -> enetc_lock_mdio
> > > -> enetc_clean_rx_ring OR napi_complete_done
> > > -> napi_gro_receive
> > > -> enetc_start_xmit
> > > -> enetc_lock_mdio
> > > -> enetc_map_tx_buffs
> > > -> enetc_unlock_mdio
> > > -> enetc_unlock_mdio
> > >
> > > After enetc_poll acquires the read lock, a higher-priority writer attempts
> > > to acquire the lock, causing preemption. The writer detects that a
> > > read lock is already held and is scheduled out. However, readers under
> > > enetc_poll cannot acquire the read lock again because a writer is already
> > > waiting, leading to a thread hang.
> > >
> > > Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
> > > to prevent recursive lock acquisition.
> > >
> > > Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
> > > Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
> >
> > Acked-by: Wei Fang <wei.fang@nxp.com>
> >
> > Hi Vladimir,
> >
> > Do you have any comments? This patch will cause a performance regression,
> > but the RCU stalls are more severe.
> >
>
> I'm fine with the change in principle. It's my fault because I didn't
> understand how rwlock writer starvation prevention is implemented, so I
> thought there would be no problem with reentrant readers.
>
> But I wonder if xdp_do_flush() shouldn't also be outside the enetc_lock_mdio()
> section. Flushing XDP buffs with XDP_REDIRECT action might lead to
> enetc_xdp_xmit() being called, which also takes the lock...
And I think the same concern exists for the xdp_do_redirect() calls.
Most of the time it will be fine, but when the batch fills up it will be
auto-flushed by bq_enqueue():
if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
bq_xmit_all(bq, 0);
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
2025-10-10 11:08 ` Vladimir Oltean
@ 2025-10-14 3:06 ` Jianpeng Chang
2025-10-14 3:24 ` Wei Fang
0 siblings, 1 reply; 6+ messages in thread
From: Jianpeng Chang @ 2025-10-14 3:06 UTC (permalink / raw)
To: Vladimir Oltean, Wei Fang
Cc: imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Claudiu Manoil, Clark Wang,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, Alexandru Marginean
On 2025/10/10 19:08, Vladimir Oltean wrote:
>
> On Fri, Oct 10, 2025 at 01:51:38PM +0300, Vladimir Oltean wrote:
>> On Fri, Oct 10, 2025 at 12:31:37PM +0300, Wei Fang wrote:
>>>> After applying the workaround for err050089, the LS1028A platform
>>>> experiences RCU stalls on the RT kernel. This issue is caused by
>>>> recursive acquisition of the enetc_mdio_lock read lock. Listed below are
>>>> some of the call stacks identified under the enetc_poll path that may
>>>> lead to a deadlock:
>>>>
>>>> enetc_poll
>>>> -> enetc_lock_mdio
>>>> -> enetc_clean_rx_ring OR napi_complete_done
>>>> -> napi_gro_receive
>>>> -> enetc_start_xmit
>>>> -> enetc_lock_mdio
>>>> -> enetc_map_tx_buffs
>>>> -> enetc_unlock_mdio
>>>> -> enetc_unlock_mdio
>>>>
>>>> After enetc_poll acquires the read lock, a higher-priority writer attempts
>>>> to acquire the lock, causing preemption. The writer detects that a
>>>> read lock is already held and is scheduled out. However, readers under
>>>> enetc_poll cannot acquire the read lock again because a writer is already
>>>> waiting, leading to a thread hang.
>>>>
>>>> Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
>>>> to prevent recursive lock acquisition.
>>>>
>>>> Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
>>>> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
>>> Acked-by: Wei Fang <wei.fang@nxp.com>
>>>
>>> Hi Vladimir,
>>>
>>> Do you have any comments? This patch will cause a performance regression,
>>> but the RCU stalls are more severe.
>>>
>> I'm fine with the change in principle. It's my fault because I didn't
>> understand how rwlock writer starvation prevention is implemented, so I
>> thought there would be no problem with reentrant readers.
>>
>> But I wonder if xdp_do_flush() shouldn't also be outside the enetc_lock_mdio()
>> section. Flushing XDP buffs with XDP_REDIRECT action might lead to
>> enetc_xdp_xmit() being called, which also takes the lock...
> And I think the same concern exists for the xdp_do_redirect() calls.
> Most of the time it will be fine, but when the batch fills up it will be
> auto-flushed by bq_enqueue():
>
> if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
> bq_xmit_all(bq, 0);
Hi Vladimir, Wei,
If xdp_do_flush and xdp_do_redirect can potentially call enetc_xdp_xmit,
we should move them outside of enetc_lock_mdio.
If there are no further comments, I will repost the patch with fixes for
xdp_do_flush and xdp_do_redirect.
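For the XDP_REDIRECT case I am thinking of something like this (untested
sketch, the actual repost may differ):

	enetc_unlock_mdio();
	err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog);
	enetc_lock_mdio();

and similarly dropping the lock around the xdp_do_flush() call.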
Thanks,
Jianpeng
^ permalink raw reply [flat|nested] 6+ messages in thread

* RE: [v3 PATCH net] net: enetc: fix the deadlock of enetc_mdio_lock
2025-10-14 3:06 ` Jianpeng Chang
@ 2025-10-14 3:24 ` Wei Fang
0 siblings, 0 replies; 6+ messages in thread
From: Wei Fang @ 2025-10-14 3:24 UTC (permalink / raw)
To: Jianpeng Chang, Vladimir Oltean
Cc: imx@lists.linux.dev, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org, Claudiu Manoil, Clark Wang,
andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org, pabeni@redhat.com, Alexandru Marginean
> > On Fri, Oct 10, 2025 at 01:51:38PM +0300, Vladimir Oltean wrote:
> >> On Fri, Oct 10, 2025 at 12:31:37PM +0300, Wei Fang wrote:
> >>>> After applying the workaround for err050089, the LS1028A platform
> >>>> experiences RCU stalls on the RT kernel. This issue is caused by
> >>>> recursive acquisition of the enetc_mdio_lock read lock. Listed below are
> >>>> some of the call stacks identified under the enetc_poll path that may
> >>>> lead to a deadlock:
> >>>>
> >>>> enetc_poll
> >>>> -> enetc_lock_mdio
> >>>> -> enetc_clean_rx_ring OR napi_complete_done
> >>>> -> napi_gro_receive
> >>>> -> enetc_start_xmit
> >>>> -> enetc_lock_mdio
> >>>> -> enetc_map_tx_buffs
> >>>> -> enetc_unlock_mdio
> >>>> -> enetc_unlock_mdio
> >>>>
> >>>> After enetc_poll acquires the read lock, a higher-priority writer attempts
> >>>> to acquire the lock, causing preemption. The writer detects that a
> >>>> read lock is already held and is scheduled out. However, readers under
> >>>> enetc_poll cannot acquire the read lock again because a writer is already
> >>>> waiting, leading to a thread hang.
> >>>>
> >>>> Avoid the deadlock by adjusting the enetc_lock_mdio() critical sections
> >>>> to prevent recursive lock acquisition.
> >>>>
> >>>> Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
> >>>> Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
> >>> Acked-by: Wei Fang <wei.fang@nxp.com>
> >>>
> >>> Hi Vladimir,
> >>>
> >>> Do you have any comments? This patch will cause a performance regression,
> >>> but the RCU stalls are more severe.
> >>>
> >> I'm fine with the change in principle. It's my fault because I didn't
> >> understand how rwlock writer starvation prevention is implemented, so I
> >> thought there would be no problem with reentrant readers.
> >>
> >> But I wonder if xdp_do_flush() shouldn't also be outside the enetc_lock_mdio()
> >> section. Flushing XDP buffs with XDP_REDIRECT action might lead to
> >> enetc_xdp_xmit() being called, which also takes the lock...
> > And I think the same concern exists for the xdp_do_redirect() calls.
> > Most of the time it will be fine, but when the batch fills up it will be
> > auto-flushed by bq_enqueue():
> >
> > if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
> > bq_xmit_all(bq, 0);
>
> Hi Vladimir, Wei,
>
> If xdp_do_flush and xdp_do_redirect can potentially call enetc_xdp_xmit,
> we should move them outside of enetc_lock_mdio.
>
> If there are no further comments, I will repost the patch with fixes for
> xdp_do_flush and xdp_do_redirect.
>
Many thanks. I have no further comments.
^ permalink raw reply [flat|nested] 6+ messages in thread