linux-kernel.vger.kernel.org archive mirror
* [PATCH] net: macb: Fix tx_ptr_lock locking
@ 2025-08-28 16:00 Sean Anderson
  2025-08-28 16:01 ` Sean Anderson
  2025-08-28 16:13 ` Robert Hancock
  0 siblings, 2 replies; 4+ messages in thread
From: Sean Anderson @ 2025-08-28 16:00 UTC (permalink / raw)
  To: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, netdev
  Cc: Robert Hancock, Mike Galbraith, Claudiu Beznea, linux-kernel,
	Nicolas Ferre, Sean Anderson

macb_start_xmit can be called with bottom-halves disabled (e.g.
transmitting from softirqs) as well as with interrupts disabled (with
netpoll). Because of this, all other functions taking tx_ptr_lock must
disable IRQs, and macb_start_xmit must only re-enable IRQs if they
were already enabled.

Fixes: 138badbc21a0 ("net: macb: use NAPI for TX completion path")
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
---

 drivers/net/ethernet/cadence/macb_main.c | 25 ++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 16d28a8b3b56..b0a8dfa341ea 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1228,7 +1228,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 	int packets = 0;
 	u32 bytes = 0;
 
-	spin_lock(&queue->tx_ptr_lock);
+	spin_lock_irq(&queue->tx_ptr_lock);
 	head = queue->tx_head;
 	for (tail = queue->tx_tail; tail != head && packets < budget; tail++) {
 		struct macb_tx_skb	*tx_skb;
@@ -1291,7 +1291,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 	    CIRC_CNT(queue->tx_head, queue->tx_tail,
 		     bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
 		netif_wake_subqueue(bp->dev, queue_index);
-	spin_unlock(&queue->tx_ptr_lock);
+	spin_unlock_irq(&queue->tx_ptr_lock);
 
 	return packets;
 }
@@ -1708,7 +1708,7 @@ static void macb_tx_restart(struct macb_queue *queue)
 	struct macb *bp = queue->bp;
 	unsigned int head_idx, tbqp;
 
-	spin_lock(&queue->tx_ptr_lock);
+	spin_lock_irq(&queue->tx_ptr_lock);
 
 	if (queue->tx_head == queue->tx_tail)
 		goto out_tx_ptr_unlock;
@@ -1720,19 +1720,19 @@ static void macb_tx_restart(struct macb_queue *queue)
 	if (tbqp == head_idx)
 		goto out_tx_ptr_unlock;
 
-	spin_lock_irq(&bp->lock);
+	spin_lock(&bp->lock);
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
-	spin_unlock_irq(&bp->lock);
+	spin_unlock(&bp->lock);
 
 out_tx_ptr_unlock:
-	spin_unlock(&queue->tx_ptr_lock);
+	spin_unlock_irq(&queue->tx_ptr_lock);
 }
 
 static bool macb_tx_complete_pending(struct macb_queue *queue)
 {
 	bool retval = false;
 
-	spin_lock(&queue->tx_ptr_lock);
+	spin_lock_irq(&queue->tx_ptr_lock);
 	if (queue->tx_head != queue->tx_tail) {
 		/* Make hw descriptor updates visible to CPU */
 		rmb();
@@ -1740,7 +1740,7 @@ static bool macb_tx_complete_pending(struct macb_queue *queue)
 		if (macb_tx_desc(queue, queue->tx_tail)->ctrl & MACB_BIT(TX_USED))
 			retval = true;
 	}
-	spin_unlock(&queue->tx_ptr_lock);
+	spin_unlock_irq(&queue->tx_ptr_lock);
 	return retval;
 }
 
@@ -2308,6 +2308,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct macb_queue *queue = &bp->queues[queue_index];
 	unsigned int desc_cnt, nr_frags, frag_size, f;
 	unsigned int hdrlen;
+	unsigned long flags;
 	bool is_lso;
 	netdev_tx_t ret = NETDEV_TX_OK;
 
@@ -2368,7 +2369,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		desc_cnt += DIV_ROUND_UP(frag_size, bp->max_tx_length);
 	}
 
-	spin_lock_bh(&queue->tx_ptr_lock);
+	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
 
 	/* This is a hard error, log it. */
 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
@@ -2392,15 +2393,15 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index),
 			     skb->len);
 
-	spin_lock_irq(&bp->lock);
+	spin_lock(&bp->lock);
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
-	spin_unlock_irq(&bp->lock);
+	spin_unlock(&bp->lock);
 
 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
 		netif_stop_subqueue(dev, queue_index);
 
 unlock:
-	spin_unlock_bh(&queue->tx_ptr_lock);
+	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
 
 	return ret;
 }
-- 
2.35.1.1320.gc452695387.dirty
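[Editorial note for readers less familiar with the kernel's spinlock variants: the distinction this patch relies on can be modeled in plain userspace C. This is a toy sketch only; the toy_* names and the irqs_enabled flag are invented for illustration and are not kernel APIs. spin_unlock_irq() unconditionally re-enables interrupts, while spin_unlock_irqrestore() puts back whatever state the caller had — which is why macb_start_xmit, reachable with IRQs already disabled via netpoll, must use the save/restore pair.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model (not kernel code) of the IRQ-state semantics behind the
 * patch. A single flag stands in for the CPU's interrupt-enable bit. */
static bool irqs_enabled = true;

static unsigned long toy_lock_irqsave(void)
{
	unsigned long flags = irqs_enabled;	/* remember caller's state */
	irqs_enabled = false;			/* "disable interrupts" */
	return flags;
}

static void toy_unlock_irqrestore(unsigned long flags)
{
	irqs_enabled = flags;			/* restore caller's state */
}

static void toy_unlock_irq(void)
{
	irqs_enabled = true;			/* unconditionally re-enable */
}

/* Model of macb_start_xmit's critical section, with both unlock styles. */
static void toy_xmit(bool use_irqrestore)
{
	unsigned long flags = toy_lock_irqsave();
	/* ... enqueue descriptors ... */
	if (use_irqrestore)
		toy_unlock_irqrestore(flags);
	else
		toy_unlock_irq();
}
```

With irqs_enabled == false on entry (the netpoll case), the irqrestore variant leaves it false, while the unconditional variant would turn interrupts back on behind the caller's back.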



* Re: [PATCH] net: macb: Fix tx_ptr_lock locking
  2025-08-28 16:00 [PATCH] net: macb: Fix tx_ptr_lock locking Sean Anderson
@ 2025-08-28 16:01 ` Sean Anderson
  2025-08-28 16:13 ` Robert Hancock
  1 sibling, 0 replies; 4+ messages in thread
From: Sean Anderson @ 2025-08-28 16:01 UTC (permalink / raw)
  To: Andrew Lunn, David S . Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, netdev
  Cc: Robert Hancock, Mike Galbraith, Claudiu Beznea, linux-kernel,
	Nicolas Ferre

On 8/28/25 12:00, Sean Anderson wrote:
> macb_start_xmit can be called with bottom-halves disabled (e.g.
> transmitting from softirqs) as well as with interrupts disabled (with
> netpoll). Because of this, all other functions taking tx_ptr_lock must
> disable IRQs, and macb_start_xmit must only re-enable IRQs if they
> were already enabled.
> 
> Fixes: 138badbc21a0 ("net: macb: use NAPI for TX completion path")
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Sean Anderson <sean.anderson@linux.dev>

Sorry, this should be [PATCH net].

--Sean


* Re: [PATCH] net: macb: Fix tx_ptr_lock locking
  2025-08-28 16:00 [PATCH] net: macb: Fix tx_ptr_lock locking Sean Anderson
  2025-08-28 16:01 ` Sean Anderson
@ 2025-08-28 16:13 ` Robert Hancock
  2025-08-28 17:19   ` Sean Anderson
  1 sibling, 1 reply; 4+ messages in thread
From: Robert Hancock @ 2025-08-28 16:13 UTC (permalink / raw)
  To: sean.anderson@linux.dev, netdev@vger.kernel.org,
	edumazet@google.com, andrew+netdev@lunn.ch, davem@davemloft.net,
	pabeni@redhat.com, kuba@kernel.org
  Cc: nicolas.ferre@microchip.com, efault@gmx.de,
	linux-kernel@vger.kernel.org, claudiu.beznea@tuxon.dev

On Thu, 2025-08-28 at 12:00 -0400, Sean Anderson wrote:
> macb_start_xmit can be called with bottom-halves disabled (e.g.
> transmitting from softirqs) as well as with interrupts disabled (with
> netpoll). Because of this, all other functions taking tx_ptr_lock
> must
> disable IRQs, and macb_start_xmit must only re-enable IRQs if they
> were already enabled.
> 
> Fixes: 138badbc21a0 ("net: macb: use NAPI for TX completion path")
> Reported-by: Mike Galbraith <efault@gmx.de>
> Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
> ---
> 
>  drivers/net/ethernet/cadence/macb_main.c | 25 ++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index 16d28a8b3b56..b0a8dfa341ea 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> @@ -1228,7 +1228,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
>         int packets = 0;
>         u32 bytes = 0;
> 
> -       spin_lock(&queue->tx_ptr_lock);
> +       spin_lock_irq(&queue->tx_ptr_lock);
> 

Hm, I think I used a non-IRQ lock here to avoid potentially disabling
interrupts for so long during TX completion processing. I don't think I
considered the netpoll case where start_xmit can be called with IRQs
disabled however. Not sure if there is a better solution to satisfy
that case without turning IRQs off entirely here?




* Re: [PATCH] net: macb: Fix tx_ptr_lock locking
  2025-08-28 16:13 ` Robert Hancock
@ 2025-08-28 17:19   ` Sean Anderson
  0 siblings, 0 replies; 4+ messages in thread
From: Sean Anderson @ 2025-08-28 17:19 UTC (permalink / raw)
  To: Robert Hancock, netdev@vger.kernel.org, edumazet@google.com,
	andrew+netdev@lunn.ch, davem@davemloft.net, pabeni@redhat.com,
	kuba@kernel.org
  Cc: nicolas.ferre@microchip.com, efault@gmx.de,
	linux-kernel@vger.kernel.org, claudiu.beznea@tuxon.dev

On 8/28/25 12:13, Robert Hancock wrote:
> On Thu, 2025-08-28 at 12:00 -0400, Sean Anderson wrote:
>> macb_start_xmit can be called with bottom-halves disabled (e.g.
>> transmitting from softirqs) as well as with interrupts disabled (with
>> netpoll). Because of this, all other functions taking tx_ptr_lock
>> must
>> disable IRQs, and macb_start_xmit must only re-enable IRQs if they
>> were already enabled.
>> 
>> Fixes: 138badbc21a0 ("net: macb: use NAPI for TX completion path")
>> Reported-by: Mike Galbraith <efault@gmx.de>
>> Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
>> ---
>> 
>>  drivers/net/ethernet/cadence/macb_main.c | 25 ++++++++++++------------
>>  1 file changed, 13 insertions(+), 12 deletions(-)
>> 
>> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
>> index 16d28a8b3b56..b0a8dfa341ea 100644
>> --- a/drivers/net/ethernet/cadence/macb_main.c
>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>> @@ -1228,7 +1228,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
>>         int packets = 0;
>>         u32 bytes = 0;
>> 
>> -       spin_lock(&queue->tx_ptr_lock);
>> +       spin_lock_irq(&queue->tx_ptr_lock);
>> 
> 
> Hm, I think I used a non-IRQ lock here to avoid potentially disabling
> interrupts for so long during TX completion processing. I don't think I
> considered the netpoll case where start_xmit can be called with IRQs
> disabled however. Not sure if there is a better solution to satisfy
> that case without turning IRQs off entirely here?

Well, we have a single producer (macb_start_xmit) so we don't need to
take a lock to enqueue anything as long as we add barriers in the right
places.
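
[Editorial note: the single-producer/single-consumer idea Sean alludes to can be sketched in userspace C11, with release/acquire atomics standing in for the kernel's wmb()/rmb() barriers. This is an illustration of the general technique, not proposed macb code; all toy_* names and sizes are hypothetical.]

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal SPSC ring sketch: one producer (the xmit path) advances head,
 * one consumer (TX completion) advances tail. No lock is needed as long
 * as the descriptor write is made visible before the index update. */

#define RING_SIZE 8			/* power of two, like tx_ring_size */

struct toy_ring {
	int desc[RING_SIZE];
	atomic_uint head;		/* written by producer only */
	atomic_uint tail;		/* written by consumer only */
};

/* Producer: returns 0 on success, -1 if the ring is full. */
static int toy_enqueue(struct toy_ring *r, int val)
{
	unsigned int head = atomic_load_explicit(&r->head, memory_order_relaxed);
	unsigned int tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SIZE)
		return -1;		/* full */
	r->desc[head % RING_SIZE] = val;
	/* release: descriptor visible before head moves (plays the wmb() role) */
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return 0;
}

/* Consumer: returns 0 on success, -1 if the ring is empty. */
static int toy_dequeue(struct toy_ring *r, int *val)
{
	unsigned int tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	unsigned int head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (head == tail)
		return -1;		/* empty */
	*val = r->desc[tail % RING_SIZE];
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return 0;
}
```

The indices run free and wrap naturally as unsigned integers, so head - tail is always the occupancy, mirroring how CIRC_SPACE/CIRC_CNT are used in the driver.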

--Sean

