public inbox for netdev@vger.kernel.org
* [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL
@ 2026-04-02 13:41 Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Kevin Hao @ 2026-04-02 13:41 UTC (permalink / raw)
  To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kevin Hao, netdev, Nicolai Buchwitz

During debugging of a suspend/resume issue, I observed that the macb driver
employs a dedicated IRQ handler for Wake-on-LAN (WoL) support. To my knowledge,
no other Ethernet driver adopts this approach. This implementation unnecessarily
complicates the suspend/resume process without providing any clear benefit.
Instead, we can easily modify the existing IRQ handler to manage WoL events,
avoiding any overhead in the TX/RX hot path.

The net throughput shows no significant difference following these changes.
The following data (net throughput and execution time of macb_interrupt) were
collected from my AMD ZynqMP board using the commands:
  taskset -c 1,2,3 iperf3 -c 192.168.3.4 -t 60 -Z -P 3 -R
  cat /sys/kernel/debug/tracing/trace_stat/function0
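
Note: the trace_stat output above assumes ftrace's function profiler was
enabled beforehand (requires CONFIG_FUNCTION_PROFILER), for example:
  echo macb_interrupt > /sys/kernel/debug/tracing/set_ftrace_filter
  echo 1 > /sys/kernel/debug/tracing/function_profile_enabled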

Before:
-------
  [SUM]   0.00-60.00  sec  5.99 GBytes   858 Mbits/sec    0             sender
  [SUM]   0.00-60.00  sec  5.99 GBytes   857 Mbits/sec                  receiver

  Function                               Hit    Time            Avg             s^2
  --------                               ---    ----            ---             ---
  macb_interrupt                      217996    678425.2 us     3.112 us        1.446 us

After:
------
  [SUM]   0.00-60.00  sec  6.00 GBytes   858 Mbits/sec    0             sender
  [SUM]   0.00-60.00  sec  5.99 GBytes   857 Mbits/sec                  receiver

  Function                               Hit    Time            Avg             s^2
  --------                               ---    ----            ---             ---
  macb_interrupt                      218212    668107.3 us     3.061 us        1.413 us

---
Changes in v2:

- Patch 2 has been reimplemented as suggested by Jakub Kicinski.

- Adjusted patches 3 and 4 to align with the reimplementation of patch 2.

- Updated the commit log for patch 4 to include verification steps.

- Link to v1: https://lore.kernel.org/r/20260328-macb-irq-v1-0-7b3e622fb46c@gmail.com

---
Kevin Hao (4):
      net: macb: Replace open-coded implementation with napi_schedule()
      net: macb: Introduce macb_queue_isr_clear() helper function
      net: macb: Factor out the handling of non-hot IRQ events into a separate function
      net: macb: Remove dedicated IRQ handler for WoL

 drivers/net/ethernet/cadence/macb.h      |   7 +
 drivers/net/ethernet/cadence/macb_main.c | 250 ++++++++++++-------------------
 2 files changed, 99 insertions(+), 158 deletions(-)
---
base-commit: bd0f139e5fc11182777b81cefc3893ea508544ec
change-id: 20260321-macb-irq-453ee09b3394

Best regards,
-- 
Kevin Hao <haokexin@gmail.com>


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH net-next v2 1/4] net: macb: Replace open-coded implementation with napi_schedule()
  2026-04-02 13:41 [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
@ 2026-04-02 13:41 ` Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 2/4] net: macb: Introduce macb_queue_isr_clear() helper function Kevin Hao
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Kevin Hao @ 2026-04-02 13:41 UTC (permalink / raw)
  To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kevin Hao, netdev, Nicolai Buchwitz

The driver currently duplicates the logic of napi_schedule(), primarily
to emit extra debug messages. These debug details do not need to live in
an individual driver: the same information is available through the
existing tracepoints in the networking core, such as
/sys/kernel/tracing/events/napi/napi_poll. Replace the open-coded
implementation with napi_schedule() to simplify the driver.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 drivers/net/ethernet/cadence/macb_main.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 7a48ebe0741f3b031d4c3c266cc0e565bab61211..6bfba329bc9f1c918295d44ebfa38eefe209b648 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2120,10 +2120,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 				queue_writel(queue, ISR, MACB_BIT(RCOMP));
 
-			if (napi_schedule_prep(&queue->napi_rx)) {
-				netdev_vdbg(bp->dev, "scheduling RX softirq\n");
-				__napi_schedule(&queue->napi_rx);
-			}
+			napi_schedule(&queue->napi_rx);
 		}
 
 		if (status & (MACB_BIT(TCOMP) |
@@ -2138,10 +2135,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 				wmb(); // ensure softirq can see update
 			}
 
-			if (napi_schedule_prep(&queue->napi_tx)) {
-				netdev_vdbg(bp->dev, "scheduling TX softirq\n");
-				__napi_schedule(&queue->napi_tx);
-			}
+			napi_schedule(&queue->napi_tx);
 		}
 
 		if (unlikely(status & (MACB_TX_ERR_FLAGS))) {

-- 
2.53.0



* [PATCH net-next v2 2/4] net: macb: Introduce macb_queue_isr_clear() helper function
  2026-04-02 13:41 [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
@ 2026-04-02 13:41 ` Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Kevin Hao @ 2026-04-02 13:41 UTC (permalink / raw)
  To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kevin Hao, netdev, Nicolai Buchwitz

The current implementation includes several occurrences of the
following pattern:
	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
		queue_writel(queue, ISR, value);

Introduce a helper function to consolidate these repeated code
segments. No functional change intended.

Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 drivers/net/ethernet/cadence/macb.h      |  7 +++++
 drivers/net/ethernet/cadence/macb_main.c | 51 ++++++++++----------------------
 2 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 16527dbab875dc2f65c74987c54b5b323a0c252c..2de56017ee0d6152dbe2c4ad0d8d7a8dfadbb3d5 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1474,6 +1474,13 @@ static inline bool macb_dma_ptp(struct macb *bp)
 	       bp->caps & MACB_CAPS_DMA_PTP;
 }
 
+static inline void macb_queue_isr_clear(struct macb *bp,
+					struct macb_queue *queue, u32 value)
+{
+	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+		queue_writel(queue, ISR, value);
+}
+
 /**
  * struct macb_platform_data - platform data for MACB Ethernet used for PCI registration
  * @pclk:		platform clock
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 6bfba329bc9f1c918295d44ebfa38eefe209b648..aa25bfed67b67b1dc059e7aefcfb351bc298b3b1 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1887,8 +1887,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
 		 */
 		if (macb_rx_pending(queue)) {
 			queue_writel(queue, IDR, bp->rx_intr_mask);
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(RCOMP));
+			macb_queue_isr_clear(bp, queue, MACB_BIT(RCOMP));
 			netdev_vdbg(bp->dev, "poll: packets pending, reschedule\n");
 			napi_schedule(napi);
 		}
@@ -1975,8 +1974,7 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
 		 */
 		if (macb_tx_complete_pending(queue)) {
 			queue_writel(queue, IDR, MACB_BIT(TCOMP));
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(TCOMP));
+			macb_queue_isr_clear(bp, queue, MACB_BIT(TCOMP));
 			netdev_vdbg(bp->dev, "TX poll: packets pending, reschedule\n");
 			napi_schedule(napi);
 		}
@@ -2043,8 +2041,7 @@ static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
 		netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
 			    (unsigned int)(queue - bp->queues),
 			    (unsigned long)status);
-		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-			queue_writel(queue, ISR, MACB_BIT(WOL));
+		macb_queue_isr_clear(bp, queue, MACB_BIT(WOL));
 		pm_wakeup_event(&bp->pdev->dev, 0);
 	}
 
@@ -2072,8 +2069,7 @@ static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
 		netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
 			    (unsigned int)(queue - bp->queues),
 			    (unsigned long)status);
-		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-			queue_writel(queue, ISR, GEM_BIT(WOL));
+		macb_queue_isr_clear(bp, queue, GEM_BIT(WOL));
 		pm_wakeup_event(&bp->pdev->dev, 0);
 	}
 
@@ -2100,8 +2096,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 		/* close possible race with dev_close */
 		if (unlikely(!netif_running(dev))) {
 			queue_writel(queue, IDR, -1);
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, -1);
+			macb_queue_isr_clear(bp, queue, -1);
 			break;
 		}
 
@@ -2117,19 +2112,15 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			 * now.
 			 */
 			queue_writel(queue, IDR, bp->rx_intr_mask);
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(RCOMP));
-
+			macb_queue_isr_clear(bp, queue, MACB_BIT(RCOMP));
 			napi_schedule(&queue->napi_rx);
 		}
 
 		if (status & (MACB_BIT(TCOMP) |
 			      MACB_BIT(TXUBR))) {
 			queue_writel(queue, IDR, MACB_BIT(TCOMP));
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(TCOMP) |
-							 MACB_BIT(TXUBR));
-
+			macb_queue_isr_clear(bp, queue, MACB_BIT(TCOMP) |
+							MACB_BIT(TXUBR));
 			if (status & MACB_BIT(TXUBR)) {
 				queue->txubr_pending = true;
 				wmb(); // ensure softirq can see update
@@ -2141,10 +2132,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 		if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
 			queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
 			schedule_work(&queue->tx_error_task);
-
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_TX_ERR_FLAGS);
-
+			macb_queue_isr_clear(bp, queue, MACB_TX_ERR_FLAGS);
 			break;
 		}
 
@@ -2164,9 +2152,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
 			wmb();
 			macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
-
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(RXUBR));
+			macb_queue_isr_clear(bp, queue, MACB_BIT(RXUBR));
 		}
 
 		if (status & MACB_BIT(ISR_ROVR)) {
@@ -2177,17 +2163,13 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			else
 				bp->hw_stats.macb.rx_overruns++;
 			spin_unlock(&bp->stats_lock);
-
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
+			macb_queue_isr_clear(bp, queue, MACB_BIT(ISR_ROVR));
 		}
 
 		if (status & MACB_BIT(HRESP)) {
 			queue_work(system_bh_wq, &bp->hresp_err_bh_work);
 			netdev_err(dev, "DMA bus error: HRESP not OK\n");
-
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, MACB_BIT(HRESP));
+			macb_queue_isr_clear(bp, queue, MACB_BIT(HRESP));
 		}
 		status = queue_readl(queue, ISR);
 	}
@@ -2883,8 +2865,7 @@ static void macb_reset_hw(struct macb *bp)
 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
 		queue_writel(queue, IDR, -1);
 		queue_readl(queue, ISR);
-		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-			queue_writel(queue, ISR, -1);
+		macb_queue_isr_clear(bp, queue, -1);
 	}
 }
 
@@ -6053,8 +6034,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
 			/* Disable all interrupts */
 			queue_writel(queue, IDR, -1);
 			queue_readl(queue, ISR);
-			if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-				queue_writel(queue, ISR, -1);
+			macb_queue_isr_clear(bp, queue, -1);
 		}
 		/* Enable Receive engine */
 		macb_writel(bp, NCR, tmp | MACB_BIT(RE));
@@ -6165,8 +6145,7 @@ static int __maybe_unused macb_resume(struct device *dev)
 		}
 		/* Clear ISR on queue 0 */
 		queue_readl(bp->queues, ISR);
-		if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
-			queue_writel(bp->queues, ISR, -1);
+		macb_queue_isr_clear(bp, bp->queues, -1);
 		spin_unlock_irqrestore(&bp->lock, flags);
 
 		/* Replace interrupt handler on queue 0 */

-- 
2.53.0



* [PATCH net-next v2 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function
  2026-04-02 13:41 [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 2/4] net: macb: Introduce macb_queue_isr_clear() helper function Kevin Hao
@ 2026-04-02 13:41 ` Kevin Hao
  2026-04-02 13:41 ` [PATCH net-next v2 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
  2026-04-03 23:10 ` [PATCH net-next v2 0/4] " patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: Kevin Hao @ 2026-04-02 13:41 UTC (permalink / raw)
  To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kevin Hao, netdev, Nicolai Buchwitz

In the current code, the IRQ handler checks each IRQ event sequentially.
Since most interrupts are triggered by TX/RX events while the other
events occur infrequently, this approach adds unnecessary overhead to
the TX/RX hot path. Reduce this overhead by extracting the handling of
all non-TX/RX events into a separate function and consolidating those
events under a new mask, so that a single check determines whether any
non-TX/RX event has occurred; only in that case does the handler branch
to the new function. This collapses four conditional checks into one and
keeps rarely executed code from polluting the instruction cache in the
hot path.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 drivers/net/ethernet/cadence/macb_main.c | 103 ++++++++++++++++++-------------
 1 file changed, 61 insertions(+), 42 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index aa25bfed67b67b1dc059e7aefcfb351bc298b3b1..98da191a2428b471ff9ac54e1f24beb3a882c546 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -70,6 +70,9 @@ struct sifive_fu540_macb_mgmt {
 #define MACB_TX_INT_FLAGS	(MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP)	\
 					| MACB_BIT(TXUBR))
 
+#define MACB_INT_MISC_FLAGS	(MACB_TX_ERR_FLAGS | MACB_BIT(RXUBR) | \
+				 MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP))
+
 /* Max length of transmit frame must be a multiple of 8 bytes */
 #define MACB_TX_LEN_ALIGN	8
 #define MACB_MAX_TX_LEN		((unsigned int)((1 << MACB_TX_FRMLEN_SIZE) - 1) & ~((unsigned int)(MACB_TX_LEN_ALIGN - 1)))
@@ -2078,12 +2081,66 @@ static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
+{
+	struct macb *bp = queue->bp;
+	struct net_device *dev;
+	u32 ctrl;
+
+	dev = bp->dev;
+
+	if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
+		queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
+		schedule_work(&queue->tx_error_task);
+		macb_queue_isr_clear(bp, queue, MACB_TX_ERR_FLAGS);
+		return -1;
+	}
+
+	/* Link change detection isn't possible with RMII, so we'll
+	 * add that if/when we get our hands on a full-blown MII PHY.
+	 */
+
+	/* There is a hardware issue under heavy load where DMA can
+	 * stop, this causes endless "used buffer descriptor read"
+	 * interrupts but it can be cleared by re-enabling RX. See
+	 * the at91rm9200 manual, section 41.3.1 or the Zynq manual
+	 * section 16.7.4 for details. RXUBR is only enabled for
+	 * these two versions.
+	 */
+	if (status & MACB_BIT(RXUBR)) {
+		ctrl = macb_readl(bp, NCR);
+		macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
+		wmb();
+		macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
+		macb_queue_isr_clear(bp, queue, MACB_BIT(RXUBR));
+	}
+
+	if (status & MACB_BIT(ISR_ROVR)) {
+		/* We missed at least one packet */
+		spin_lock(&bp->stats_lock);
+		if (macb_is_gem(bp))
+			bp->hw_stats.gem.rx_overruns++;
+		else
+			bp->hw_stats.macb.rx_overruns++;
+		spin_unlock(&bp->stats_lock);
+		macb_queue_isr_clear(bp, queue, MACB_BIT(ISR_ROVR));
+	}
+
+	if (status & MACB_BIT(HRESP)) {
+		queue_work(system_bh_wq, &bp->hresp_err_bh_work);
+		netdev_err(dev, "DMA bus error: HRESP not OK\n");
+		macb_queue_isr_clear(bp, queue, MACB_BIT(HRESP));
+	}
+
+	return 0;
+}
+
 static irqreturn_t macb_interrupt(int irq, void *dev_id)
 {
 	struct macb_queue *queue = dev_id;
 	struct macb *bp = queue->bp;
 	struct net_device *dev = bp->dev;
-	u32 status, ctrl;
+	u32 status;
 
 	status = queue_readl(queue, ISR);
 
@@ -2129,48 +2186,10 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
 			napi_schedule(&queue->napi_tx);
 		}
 
-		if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
-			queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
-			schedule_work(&queue->tx_error_task);
-			macb_queue_isr_clear(bp, queue, MACB_TX_ERR_FLAGS);
-			break;
-		}
+		if (unlikely(status & MACB_INT_MISC_FLAGS))
+			if (macb_interrupt_misc(queue, status))
+				break;
 
-		/* Link change detection isn't possible with RMII, so we'll
-		 * add that if/when we get our hands on a full-blown MII PHY.
-		 */
-
-		/* There is a hardware issue under heavy load where DMA can
-		 * stop, this causes endless "used buffer descriptor read"
-		 * interrupts but it can be cleared by re-enabling RX. See
-		 * the at91rm9200 manual, section 41.3.1 or the Zynq manual
-		 * section 16.7.4 for details. RXUBR is only enabled for
-		 * these two versions.
-		 */
-		if (status & MACB_BIT(RXUBR)) {
-			ctrl = macb_readl(bp, NCR);
-			macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
-			wmb();
-			macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
-			macb_queue_isr_clear(bp, queue, MACB_BIT(RXUBR));
-		}
-
-		if (status & MACB_BIT(ISR_ROVR)) {
-			/* We missed at least one packet */
-			spin_lock(&bp->stats_lock);
-			if (macb_is_gem(bp))
-				bp->hw_stats.gem.rx_overruns++;
-			else
-				bp->hw_stats.macb.rx_overruns++;
-			spin_unlock(&bp->stats_lock);
-			macb_queue_isr_clear(bp, queue, MACB_BIT(ISR_ROVR));
-		}
-
-		if (status & MACB_BIT(HRESP)) {
-			queue_work(system_bh_wq, &bp->hresp_err_bh_work);
-			netdev_err(dev, "DMA bus error: HRESP not OK\n");
-			macb_queue_isr_clear(bp, queue, MACB_BIT(HRESP));
-		}
 		status = queue_readl(queue, ISR);
 	}
 

-- 
2.53.0



* [PATCH net-next v2 4/4] net: macb: Remove dedicated IRQ handler for WoL
  2026-04-02 13:41 [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
                   ` (2 preceding siblings ...)
  2026-04-02 13:41 ` [PATCH net-next v2 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
@ 2026-04-02 13:41 ` Kevin Hao
  2026-04-03 23:10 ` [PATCH net-next v2 0/4] " patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: Kevin Hao @ 2026-04-02 13:41 UTC (permalink / raw)
  To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
  Cc: Kevin Hao, netdev, Nicolai Buchwitz

In the current implementation, the suspend path frees the existing IRQ
handler and installs a dedicated WoL IRQ handler, and the resume path
restores the original handler. No other Ethernet driver uses this
approach, and it unnecessarily complicates the suspend/resume process.
With the IRQ handler reworked by the previous patches, WoL interrupts
can now be handled without adding any overhead in the TX/RX hot path,
so remove the dedicated WoL IRQ handler.

I have verified WoL functionality on my AMD ZynqMP board using the
following steps:
  root@amd-zynqmp:~# ifconfig end0 192.168.3.3
  root@amd-zynqmp:~# ethtool -s end0 wol a
  root@amd-zynqmp:~# echo mem >/sys/power/state

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
 drivers/net/ethernet/cadence/macb_main.c | 112 ++++++++-----------------------
 1 file changed, 27 insertions(+), 85 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 98da191a2428b471ff9ac54e1f24beb3a882c546..a4961abb95ae7fa2e40e61b5a8f0da5a83572131 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -71,7 +71,8 @@ struct sifive_fu540_macb_mgmt {
 					| MACB_BIT(TXUBR))
 
 #define MACB_INT_MISC_FLAGS	(MACB_TX_ERR_FLAGS | MACB_BIT(RXUBR) | \
-				 MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP))
+				 MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP) | \
+				 GEM_BIT(WOL) | MACB_BIT(WOL))
 
 /* Max length of transmit frame must be a multiple of 8 bytes */
 #define MACB_TX_LEN_ALIGN	8
@@ -2025,60 +2026,30 @@ static void macb_hresp_error_task(struct work_struct *work)
 	netif_tx_start_all_queues(dev);
 }
 
-static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
+static void macb_wol_interrupt(struct macb_queue *queue, u32 status)
 {
-	struct macb_queue *queue = dev_id;
 	struct macb *bp = queue->bp;
-	u32 status;
 
-	status = queue_readl(queue, ISR);
-
-	if (unlikely(!status))
-		return IRQ_NONE;
-
-	spin_lock(&bp->lock);
-
-	if (status & MACB_BIT(WOL)) {
-		queue_writel(queue, IDR, MACB_BIT(WOL));
-		macb_writel(bp, WOL, 0);
-		netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
-			    (unsigned int)(queue - bp->queues),
-			    (unsigned long)status);
-		macb_queue_isr_clear(bp, queue, MACB_BIT(WOL));
-		pm_wakeup_event(&bp->pdev->dev, 0);
-	}
-
-	spin_unlock(&bp->lock);
-
-	return IRQ_HANDLED;
+	queue_writel(queue, IDR, MACB_BIT(WOL));
+	macb_writel(bp, WOL, 0);
+	netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
+		    (unsigned int)(queue - bp->queues),
+		    (unsigned long)status);
+	macb_queue_isr_clear(bp, queue, MACB_BIT(WOL));
+	pm_wakeup_event(&bp->pdev->dev, 0);
 }
 
-static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
+static void gem_wol_interrupt(struct macb_queue *queue, u32 status)
 {
-	struct macb_queue *queue = dev_id;
 	struct macb *bp = queue->bp;
-	u32 status;
 
-	status = queue_readl(queue, ISR);
-
-	if (unlikely(!status))
-		return IRQ_NONE;
-
-	spin_lock(&bp->lock);
-
-	if (status & GEM_BIT(WOL)) {
-		queue_writel(queue, IDR, GEM_BIT(WOL));
-		gem_writel(bp, WOL, 0);
-		netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
-			    (unsigned int)(queue - bp->queues),
-			    (unsigned long)status);
-		macb_queue_isr_clear(bp, queue, GEM_BIT(WOL));
-		pm_wakeup_event(&bp->pdev->dev, 0);
-	}
-
-	spin_unlock(&bp->lock);
-
-	return IRQ_HANDLED;
+	queue_writel(queue, IDR, GEM_BIT(WOL));
+	gem_writel(bp, WOL, 0);
+	netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
+		    (unsigned int)(queue - bp->queues),
+		    (unsigned long)status);
+	macb_queue_isr_clear(bp, queue, GEM_BIT(WOL));
+	pm_wakeup_event(&bp->pdev->dev, 0);
 }
 
 static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
@@ -2132,6 +2103,14 @@ static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
 		macb_queue_isr_clear(bp, queue, MACB_BIT(HRESP));
 	}
 
+	if (macb_is_gem(bp)) {
+		if (status & GEM_BIT(WOL))
+			gem_wol_interrupt(queue, status);
+	} else {
+		if (status & MACB_BIT(WOL))
+			macb_wol_interrupt(queue, status);
+	}
+
 	return 0;
 }
 
@@ -6004,7 +5983,6 @@ static int __maybe_unused macb_suspend(struct device *dev)
 	unsigned long flags;
 	u32 tmp, ifa_local;
 	unsigned int q;
-	int err;
 
 	if (!device_may_wakeup(&bp->dev->dev))
 		phy_exit(bp->phy);
@@ -6067,39 +6045,15 @@ static int __maybe_unused macb_suspend(struct device *dev)
 			/* write IP address into register */
 			tmp |= MACB_BFEXT(IP, ifa_local);
 		}
-		spin_unlock_irqrestore(&bp->lock, flags);
 
-		/* Change interrupt handler and
-		 * Enable WoL IRQ on queue 0
-		 */
-		devm_free_irq(dev, bp->queues[0].irq, bp->queues);
 		if (macb_is_gem(bp)) {
-			err = devm_request_irq(dev, bp->queues[0].irq, gem_wol_interrupt,
-					       IRQF_SHARED, netdev->name, bp->queues);
-			if (err) {
-				dev_err(dev,
-					"Unable to request IRQ %d (error %d)\n",
-					bp->queues[0].irq, err);
-				return err;
-			}
-			spin_lock_irqsave(&bp->lock, flags);
 			queue_writel(bp->queues, IER, GEM_BIT(WOL));
 			gem_writel(bp, WOL, tmp);
-			spin_unlock_irqrestore(&bp->lock, flags);
 		} else {
-			err = devm_request_irq(dev, bp->queues[0].irq, macb_wol_interrupt,
-					       IRQF_SHARED, netdev->name, bp->queues);
-			if (err) {
-				dev_err(dev,
-					"Unable to request IRQ %d (error %d)\n",
-					bp->queues[0].irq, err);
-				return err;
-			}
-			spin_lock_irqsave(&bp->lock, flags);
 			queue_writel(bp->queues, IER, MACB_BIT(WOL));
 			macb_writel(bp, WOL, tmp);
-			spin_unlock_irqrestore(&bp->lock, flags);
 		}
+		spin_unlock_irqrestore(&bp->lock, flags);
 
 		enable_irq_wake(bp->queues[0].irq);
 	}
@@ -6141,7 +6095,6 @@ static int __maybe_unused macb_resume(struct device *dev)
 	struct macb_queue *queue;
 	unsigned long flags;
 	unsigned int q;
-	int err;
 
 	if (!device_may_wakeup(&bp->dev->dev))
 		phy_init(bp->phy);
@@ -6167,17 +6120,6 @@ static int __maybe_unused macb_resume(struct device *dev)
 		macb_queue_isr_clear(bp, bp->queues, -1);
 		spin_unlock_irqrestore(&bp->lock, flags);
 
-		/* Replace interrupt handler on queue 0 */
-		devm_free_irq(dev, bp->queues[0].irq, bp->queues);
-		err = devm_request_irq(dev, bp->queues[0].irq, macb_interrupt,
-				       IRQF_SHARED, netdev->name, bp->queues);
-		if (err) {
-			dev_err(dev,
-				"Unable to request IRQ %d (error %d)\n",
-				bp->queues[0].irq, err);
-			return err;
-		}
-
 		disable_irq_wake(bp->queues[0].irq);
 
 		/* Now make sure we disable phy before moving

-- 
2.53.0



* Re: [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL
  2026-04-02 13:41 [PATCH net-next v2 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
                   ` (3 preceding siblings ...)
  2026-04-02 13:41 ` [PATCH net-next v2 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
@ 2026-04-03 23:10 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2026-04-03 23:10 UTC (permalink / raw)
  To: Kevin Hao
  Cc: nicolas.ferre, claudiu.beznea, andrew+netdev, davem, edumazet,
	kuba, pabeni, netdev, nb

Hello:

This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Thu, 02 Apr 2026 21:41:21 +0800 you wrote:
> During debugging of a suspend/resume issue, I observed that the macb driver
> employs a dedicated IRQ handler for Wake-on-LAN (WoL) support. To my knowledge,
> no other Ethernet driver adopts this approach. This implementation unnecessarily
> complicates the suspend/resume process without providing any clear benefit.
> Instead, we can easily modify the existing IRQ handler to manage WoL events,
> avoiding any overhead in the TX/RX hot path.
> 
> [...]

Here is the summary with links:
  - [net-next,v2,1/4] net: macb: Replace open-coded implementation with napi_schedule()
    https://git.kernel.org/netdev/net-next/c/dc3bd465ea36
  - [net-next,v2,2/4] net: macb: Introduce macb_queue_isr_clear() helper function
    https://git.kernel.org/netdev/net-next/c/5986ff6e4136
  - [net-next,v2,3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function
    https://git.kernel.org/netdev/net-next/c/6d55ce805b26
  - [net-next,v2,4/4] net: macb: Remove dedicated IRQ handler for WoL
    https://git.kernel.org/netdev/net-next/c/6637c03f35fa

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



