public inbox for stable@vger.kernel.org
* Re: Patch "net: enetc: fix the deadlock of enetc_mdio_lock" has been added to the 6.1-stable tree
       [not found] <20251025224340.3962503-1-sashal@kernel.org>
@ 2026-02-09  1:48 ` Jianpeng Chang
  2026-02-09  2:14   ` Sasha Levin
  0 siblings, 1 reply; 2+ messages in thread
From: Jianpeng Chang @ 2026-02-09  1:48 UTC (permalink / raw)
  To: sashal; +Cc: stable

Hi Sasha,

Three months have passed and the patch still hasn't appeared in
linux-6.1.y, though I see it has been merged into 6.12.y, 6.17.y,
and 6.18.y.

The patch is also no longer in the stable-queue/queue-6.1 directory.
Could you clarify whether there was an issue preventing it from being
merged into 6.1.y?

Thanks,
Jianpeng

On 2025/10/26 6:43 AM, Sasha Levin wrote:
> 
> This is a note to let you know that I've just added the patch titled
> 
>      net: enetc: fix the deadlock of enetc_mdio_lock
> 
> to the 6.1-stable tree which can be found at:
>      http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
> 
> The filename of the patch is:
>       net-enetc-fix-the-deadlock-of-enetc_mdio_lock.patch
> and it can be found in the queue-6.1 subdirectory.
> 
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@vger.kernel.org> know about it.
> 
> 
> 
> commit 18c00ec29df3d353de5407578e2bfe84f63c76dc
> Author: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
> Date:   Wed Oct 15 10:14:27 2025 +0800
> 
>      net: enetc: fix the deadlock of enetc_mdio_lock
> 
>      [ Upstream commit 50bd33f6b3922a6b760aa30d409cae891cec8fb5 ]
> 
>      After applying the workaround for err050089, the LS1028A platform
>      experiences RCU stalls on RT kernel. This issue is caused by the
>      recursive acquisition of the read lock enetc_mdio_lock. Listed below
>      are some of the call stacks identified under the enetc_poll path that
>      may lead to a deadlock:
> 
>      enetc_poll
>        -> enetc_lock_mdio
>        -> enetc_clean_rx_ring OR napi_complete_done
>           -> napi_gro_receive
>              -> enetc_start_xmit
>                 -> enetc_lock_mdio
>                 -> enetc_map_tx_buffs
>                 -> enetc_unlock_mdio
>        -> enetc_unlock_mdio
> 
>      After enetc_poll acquires the read lock, a higher-priority writer attempts
>      to acquire the lock, causing preemption. The writer detects that a
>      read lock is already held and is scheduled out. However, readers under
>      enetc_poll cannot acquire the read lock again because a writer is already
>      waiting, leading to a thread hang.
> 
>      Currently, the deadlock is avoided by adjusting enetc_lock_mdio to prevent
>      recursive lock acquisition.
> 
>      Fixes: 6d36ecdbc441 ("net: enetc: take the MDIO lock only once per NAPI poll cycle")
>      Signed-off-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
>      Acked-by: Wei Fang <wei.fang@nxp.com>
>      Link: https://patch.msgid.link/20251015021427.180757-1-jianpeng.chang.cn@windriver.com
>      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
>      Signed-off-by: Sasha Levin <sashal@kernel.org>
> 
> diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
> index 44ae1d2c34fd6..ed1db7f056e66 100644
> --- a/drivers/net/ethernet/freescale/enetc/enetc.c
> +++ b/drivers/net/ethernet/freescale/enetc/enetc.c
> @@ -1225,6 +1225,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
>          /* next descriptor to process */
>          i = rx_ring->next_to_clean;
> 
> +       enetc_lock_mdio();
> +
>          while (likely(rx_frm_cnt < work_limit)) {
>                  union enetc_rx_bd *rxbd;
>                  struct sk_buff *skb;
> @@ -1260,7 +1262,9 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
>                  rx_byte_cnt += skb->len + ETH_HLEN;
>                  rx_frm_cnt++;
> 
> +               enetc_unlock_mdio();
>                  napi_gro_receive(napi, skb);
> +               enetc_lock_mdio();
>          }
> 
>          rx_ring->next_to_clean = i;
> @@ -1268,6 +1272,8 @@ static int enetc_clean_rx_ring(struct enetc_bdr *rx_ring,
>          rx_ring->stats.packets += rx_frm_cnt;
>          rx_ring->stats.bytes += rx_byte_cnt;
> 
> +       enetc_unlock_mdio();
> +
>          return rx_frm_cnt;
>   }
> 
> @@ -1572,6 +1578,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
>          /* next descriptor to process */
>          i = rx_ring->next_to_clean;
> 
> +       enetc_lock_mdio();
> +
>          while (likely(rx_frm_cnt < work_limit)) {
>                  union enetc_rx_bd *rxbd, *orig_rxbd;
>                  int orig_i, orig_cleaned_cnt;
> @@ -1631,7 +1639,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
>                          if (unlikely(!skb))
>                                  goto out;
> 
> +                       enetc_unlock_mdio();
>                          napi_gro_receive(napi, skb);
> +                       enetc_lock_mdio();
>                          break;
>                  case XDP_TX:
>                          tx_ring = priv->xdp_tx_ring[rx_ring->index];
> @@ -1660,7 +1670,9 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
>                          }
>                          break;
>                  case XDP_REDIRECT:
> +                       enetc_unlock_mdio();
>                          err = xdp_do_redirect(rx_ring->ndev, &xdp_buff, prog);
> +                       enetc_lock_mdio();
>                          if (unlikely(err)) {
>                                  enetc_xdp_drop(rx_ring, orig_i, i);
>                                  rx_ring->stats.xdp_redirect_failures++;
> @@ -1680,8 +1692,11 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
>          rx_ring->stats.packets += rx_frm_cnt;
>          rx_ring->stats.bytes += rx_byte_cnt;
> 
> -       if (xdp_redirect_frm_cnt)
> +       if (xdp_redirect_frm_cnt) {
> +               enetc_unlock_mdio();
>                  xdp_do_flush();
> +               enetc_lock_mdio();
> +       }
> 
>          if (xdp_tx_frm_cnt)
>                  enetc_update_tx_ring_tail(tx_ring);
> @@ -1690,6 +1705,8 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
>                  enetc_refill_rx_ring(rx_ring, enetc_bd_unused(rx_ring) -
>                                       rx_ring->xdp.xdp_tx_in_flight);
> 
> +       enetc_unlock_mdio();
> +
>          return rx_frm_cnt;
>   }
> 
> @@ -1708,6 +1725,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
>          for (i = 0; i < v->count_tx_rings; i++)
>                  if (!enetc_clean_tx_ring(&v->tx_ring[i], budget))
>                          complete = false;
> +       enetc_unlock_mdio();
> 
>          prog = rx_ring->xdp.prog;
>          if (prog)
> @@ -1719,10 +1737,8 @@ static int enetc_poll(struct napi_struct *napi, int budget)
>          if (work_done)
>                  v->rx_napi_work = true;
> 
> -       if (!complete) {
> -               enetc_unlock_mdio();
> +       if (!complete)
>                  return budget;
> -       }
> 
>          napi_complete_done(napi, work_done);
> 
> @@ -1731,6 +1747,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
> 
>          v->rx_napi_work = false;
> 
> +       enetc_lock_mdio();
>          /* enable interrupts */
>          enetc_wr_reg_hot(v->rbier, ENETC_RBIER_RXTIE);
> 



* Re: Patch "net: enetc: fix the deadlock of enetc_mdio_lock" has been added to the 6.1-stable tree
  2026-02-09  1:48 ` Patch "net: enetc: fix the deadlock of enetc_mdio_lock" has been added to the 6.1-stable tree Jianpeng Chang
@ 2026-02-09  2:14   ` Sasha Levin
  0 siblings, 0 replies; 2+ messages in thread
From: Sasha Levin @ 2026-02-09  2:14 UTC (permalink / raw)
  To: Jianpeng Chang; +Cc: stable

On Mon, Feb 09, 2026 at 09:48:18AM +0800, Jianpeng Chang wrote:
>Hi Sasha,
>
>Three months have passed and the patch still hasn't appeared in
>linux-6.1.y, though I see it has been merged into 6.12.y, 6.17.y,
>and 6.18.y.
>
>The patch is also no longer in the stable-queue/queue-6.1 directory.
>Could you clarify whether there was an issue preventing it from being
>merged into 6.1.y?

It was dropped from 6.1 here:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git/commit/?id=a9347bdaea6788ca8687d678bfcd88398bdeaa03

That said, you never actually tagged the commit for stable backporting, so
I'm not sure why you'd expect to see it in any tree to begin with.
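[Editor's note: the usual way to request a stable backport is a stable tag
in the commit's trailer area, as described in the kernel's
Documentation/process/stable-kernel-rules.rst, e.g.:]

```
Cc: <stable@vger.kernel.org> # 6.1
```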

-- 
Thanks,
Sasha

