* [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL
@ 2026-03-28 10:17 Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
` (4 more replies)
0 siblings, 5 replies; 14+ messages in thread
From: Kevin Hao @ 2026-03-28 10:17 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Kevin Hao, netdev
During debugging of a suspend/resume issue, I observed that the macb driver
employs a dedicated IRQ handler for Wake-on-LAN (WoL) support. To my knowledge,
no other Ethernet driver adopts this approach. This implementation unnecessarily
complicates the suspend/resume process without providing any clear benefit.
Instead, we can easily modify the existing IRQ handler to manage WoL events,
avoiding any overhead in the TX/RX hot path.
I am skeptical that the minor IRQ handler optimizations proposed in this
patch series yield any measurable throughput improvement. However, the
execution time of macb_interrupt() does appear to be slightly reduced.
The following data (network throughput and execution time of
macb_interrupt()) were collected from my AMD ZynqMP board using the
following commands:
taskset -c 1,2,3 iperf3 -c 192.168.3.4 -t 60 -Z -P 3 -R
cat /sys/kernel/debug/tracing/trace_stat/function0
Before:
-------
[SUM] 0.00-60.00 sec 5.99 GBytes 857 Mbits/sec 0 sender
[SUM] 0.00-60.00 sec 5.98 GBytes 856 Mbits/sec receiver
Function Hit Time Avg s^2
-------- --- ---- --- ---
macb_interrupt 218538 723327.5 us 3.309 us 1.022 us
After:
------
[SUM] 0.00-60.00 sec 5.99 GBytes 857 Mbits/sec 0 sender
[SUM] 0.00-60.00 sec 5.98 GBytes 857 Mbits/sec receiver
Function Hit Time Avg s^2
-------- --- ---- --- ---
macb_interrupt 218558 646355.1 us 2.957 us 1.290 us
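For context, the trace_stat output above comes from ftrace's function
profiler. A typical way to collect these numbers (a sketch, assuming
CONFIG_FUNCTION_PROFILER is enabled and tracefs is mounted under
/sys/kernel/debug/tracing):

```shell
# Profile only macb_interrupt() to keep the tracing overhead low
cd /sys/kernel/debug/tracing
echo macb_interrupt > set_ftrace_filter
echo 1 > function_profile_enabled

# ... run the iperf3 workload shown above ...

# Per-CPU stats: hit count, total time, average, variance (s^2)
cat trace_stat/function0
echo 0 > function_profile_enabled
```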
---
Kevin Hao (4):
net: macb: Replace open-coded implementation with napi_schedule()
net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
net: macb: Factor out the handling of non-hot IRQ events into a separate function
net: macb: Remove dedicated IRQ handler for WoL
drivers/net/ethernet/cadence/macb_main.c | 244 +++++++++++++------------------
1 file changed, 102 insertions(+), 142 deletions(-)
---
base-commit: 3b058d1aeeeff27a7289529c4944291613b364e9
change-id: 20260321-macb-irq-453ee09b3394
Best regards,
--
Kevin Hao <haokexin@gmail.com>
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH net-next 1/4] net: macb: Replace open-coded implementation with napi_schedule()
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
@ 2026-03-28 10:17 ` Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler Kevin Hao
` (3 subsequent siblings)
4 siblings, 0 replies; 14+ messages in thread
From: Kevin Hao @ 2026-03-28 10:17 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Kevin Hao, netdev
The driver currently duplicates the logic of napi_schedule() solely to
emit additional debug messages. This debug output is not essential in an
individual driver and can be obtained through the networking core's
existing tracepoints, such as
/sys/kernel/tracing/events/napi/napi_poll. Therefore, replace the
open-coded implementation with napi_schedule() to simplify the driver's
code.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
drivers/net/ethernet/cadence/macb_main.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index c8182559edf602e7f5d94644cffb22e8f58423cc..886246a6f6bdd0b6a8cb4b86d7788ac181ee602a 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2120,10 +2120,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, MACB_BIT(RCOMP));
- if (napi_schedule_prep(&queue->napi_rx)) {
- netdev_vdbg(bp->dev, "scheduling RX softirq\n");
- __napi_schedule(&queue->napi_rx);
- }
+ napi_schedule(&queue->napi_rx);
}
if (status & (MACB_BIT(TCOMP) |
@@ -2138,10 +2135,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
wmb(); // ensure softirq can see update
}
- if (napi_schedule_prep(&queue->napi_tx)) {
- netdev_vdbg(bp->dev, "scheduling TX softirq\n");
- __napi_schedule(&queue->napi_tx);
- }
+ napi_schedule(&queue->napi_tx);
}
if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
--
2.53.0
* [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
@ 2026-03-28 10:17 ` Kevin Hao
2026-04-01 2:54 ` Jakub Kicinski
2026-03-28 10:17 ` [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Kevin Hao @ 2026-03-28 10:17 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Kevin Hao, netdev
Currently, the MACB_CAPS_ISR_CLEAR_ON_WRITE flag is checked in every
branch of the IRQ handler. This repeated evaluation is unnecessary.
By consolidating the flag check, we eliminate redundant loads of
bp->caps when TX and RX events occur simultaneously, a common scenario
under high network throughput. Additionally, this optimization reduces
the function size from 0x2e8 to 0x2c4 bytes.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
drivers/net/ethernet/cadence/macb_main.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 886246a6f6bdd0b6a8cb4b86d7788ac181ee602a..743abe11324c690c11993d7be9ed5b73422dd17c 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2088,19 +2088,22 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
struct macb *bp = queue->bp;
struct net_device *dev = bp->dev;
u32 status, ctrl;
+ bool isr_clear;
status = queue_readl(queue, ISR);
if (unlikely(!status))
return IRQ_NONE;
+ isr_clear = bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE;
+
spin_lock(&bp->lock);
while (status) {
/* close possible race with dev_close */
if (unlikely(!netif_running(dev))) {
queue_writel(queue, IDR, -1);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, -1);
break;
}
@@ -2117,7 +2120,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
* now.
*/
queue_writel(queue, IDR, bp->rx_intr_mask);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_BIT(RCOMP));
napi_schedule(&queue->napi_rx);
@@ -2126,7 +2129,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (status & (MACB_BIT(TCOMP) |
MACB_BIT(TXUBR))) {
queue_writel(queue, IDR, MACB_BIT(TCOMP));
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_BIT(TCOMP) |
MACB_BIT(TXUBR));
@@ -2142,7 +2145,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
schedule_work(&queue->tx_error_task);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_TX_ERR_FLAGS);
break;
@@ -2165,7 +2168,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
wmb();
macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_BIT(RXUBR));
}
@@ -2178,7 +2181,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
bp->hw_stats.macb.rx_overruns++;
spin_unlock(&bp->stats_lock);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
}
@@ -2186,7 +2189,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
queue_work(system_bh_wq, &bp->hresp_err_bh_work);
netdev_err(dev, "DMA bus error: HRESP not OK\n");
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ if (isr_clear)
queue_writel(queue, ISR, MACB_BIT(HRESP));
}
status = queue_readl(queue, ISR);
--
2.53.0
* [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler Kevin Hao
@ 2026-03-28 10:17 ` Kevin Hao
2026-04-01 2:54 ` Jakub Kicinski
2026-03-28 10:17 ` [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
2026-04-03 16:17 ` [PATCH net-next 0/4] " Simon Horman
4 siblings, 1 reply; 14+ messages in thread
From: Kevin Hao @ 2026-03-28 10:17 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Kevin Hao, netdev
In the current code, the IRQ handler checks each IRQ event sequentially.
Since most IRQ events are related to TX/RX operations, while other
events occur infrequently, this approach introduces unnecessary overhead
in the hot path for TX/RX processing. This patch reduces such overhead
by extracting the handling of all non-TX/RX events into a new function
and consolidating these events under a new flag. As a result, only a
single check is required to determine whether any non-TX/RX events have
occurred. If such events exist, the handler jumps to the new function.
This optimization reduces four conditional checks to one and prevents
the instruction cache from being polluted with rarely used code in the
hot path.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
drivers/net/ethernet/cadence/macb_main.c | 123 ++++++++++++++++++-------------
1 file changed, 72 insertions(+), 51 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 743abe11324c690c11993d7be9ed5b73422dd17c..c53b28b42a46489722461957625e8377be63e427 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -70,6 +70,9 @@ struct sifive_fu540_macb_mgmt {
#define MACB_TX_INT_FLAGS (MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP) \
| MACB_BIT(TXUBR))
+#define MACB_INT_MISC_FLAGS (MACB_TX_ERR_FLAGS | MACB_BIT(RXUBR) | \
+ MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP))
+
/* Max length of transmit frame must be a multiple of 8 bytes */
#define MACB_TX_LEN_ALIGN 8
#define MACB_MAX_TX_LEN ((unsigned int)((1 << MACB_TX_FRMLEN_SIZE) - 1) & ~((unsigned int)(MACB_TX_LEN_ALIGN - 1)))
@@ -2082,12 +2085,77 @@ static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}
+static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
+{
+ struct macb *bp = queue->bp;
+ struct net_device *dev;
+ bool isr_clear;
+ u32 ctrl;
+
+ dev = bp->dev;
+ isr_clear = bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE;
+
+ if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
+ queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
+ schedule_work(&queue->tx_error_task);
+
+ if (isr_clear)
+ queue_writel(queue, ISR, MACB_TX_ERR_FLAGS);
+
+ return -1;
+ }
+
+ /* Link change detection isn't possible with RMII, so we'll
+ * add that if/when we get our hands on a full-blown MII PHY.
+ */
+
+ /* There is a hardware issue under heavy load where DMA can
+ * stop, this causes endless "used buffer descriptor read"
+ * interrupts but it can be cleared by re-enabling RX. See
+ * the at91rm9200 manual, section 41.3.1 or the Zynq manual
+ * section 16.7.4 for details. RXUBR is only enabled for
+ * these two versions.
+ */
+ if (status & MACB_BIT(RXUBR)) {
+ ctrl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
+ wmb();
+ macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
+
+ if (isr_clear)
+ queue_writel(queue, ISR, MACB_BIT(RXUBR));
+ }
+
+ if (status & MACB_BIT(ISR_ROVR)) {
+ /* We missed at least one packet */
+ spin_lock(&bp->stats_lock);
+ if (macb_is_gem(bp))
+ bp->hw_stats.gem.rx_overruns++;
+ else
+ bp->hw_stats.macb.rx_overruns++;
+ spin_unlock(&bp->stats_lock);
+
+ if (isr_clear)
+ queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
+ }
+
+ if (status & MACB_BIT(HRESP)) {
+ queue_work(system_bh_wq, &bp->hresp_err_bh_work);
+ netdev_err(dev, "DMA bus error: HRESP not OK\n");
+
+ if (isr_clear)
+ queue_writel(queue, ISR, MACB_BIT(HRESP));
+ }
+
+ return 0;
+}
+
static irqreturn_t macb_interrupt(int irq, void *dev_id)
{
struct macb_queue *queue = dev_id;
struct macb *bp = queue->bp;
struct net_device *dev = bp->dev;
- u32 status, ctrl;
+ u32 status;
bool isr_clear;
status = queue_readl(queue, ISR);
@@ -2141,57 +2209,10 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
napi_schedule(&queue->napi_tx);
}
- if (unlikely(status & (MACB_TX_ERR_FLAGS))) {
- queue_writel(queue, IDR, MACB_TX_INT_FLAGS);
- schedule_work(&queue->tx_error_task);
+ if (unlikely(status & MACB_INT_MISC_FLAGS))
+ if (macb_interrupt_misc(queue, status))
+ break;
- if (isr_clear)
- queue_writel(queue, ISR, MACB_TX_ERR_FLAGS);
-
- break;
- }
-
- /* Link change detection isn't possible with RMII, so we'll
- * add that if/when we get our hands on a full-blown MII PHY.
- */
-
- /* There is a hardware issue under heavy load where DMA can
- * stop, this causes endless "used buffer descriptor read"
- * interrupts but it can be cleared by re-enabling RX. See
- * the at91rm9200 manual, section 41.3.1 or the Zynq manual
- * section 16.7.4 for details. RXUBR is only enabled for
- * these two versions.
- */
- if (status & MACB_BIT(RXUBR)) {
- ctrl = macb_readl(bp, NCR);
- macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
- wmb();
- macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
-
- if (isr_clear)
- queue_writel(queue, ISR, MACB_BIT(RXUBR));
- }
-
- if (status & MACB_BIT(ISR_ROVR)) {
- /* We missed at least one packet */
- spin_lock(&bp->stats_lock);
- if (macb_is_gem(bp))
- bp->hw_stats.gem.rx_overruns++;
- else
- bp->hw_stats.macb.rx_overruns++;
- spin_unlock(&bp->stats_lock);
-
- if (isr_clear)
- queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
- }
-
- if (status & MACB_BIT(HRESP)) {
- queue_work(system_bh_wq, &bp->hresp_err_bh_work);
- netdev_err(dev, "DMA bus error: HRESP not OK\n");
-
- if (isr_clear)
- queue_writel(queue, ISR, MACB_BIT(HRESP));
- }
status = queue_readl(queue, ISR);
}
--
2.53.0
* [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
` (2 preceding siblings ...)
2026-03-28 10:17 ` [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
@ 2026-03-28 10:17 ` Kevin Hao
2026-04-01 2:55 ` Jakub Kicinski
2026-04-03 16:17 ` [PATCH net-next 0/4] " Simon Horman
4 siblings, 1 reply; 14+ messages in thread
From: Kevin Hao @ 2026-03-28 10:17 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Kevin Hao, netdev
In the current implementation, the suspend/resume path frees the
existing IRQ handler and sets up a dedicated WoL IRQ handler, then
restores the original handler upon resume. This approach is not used
by any other Ethernet driver and unnecessarily complicates the
suspend/resume process. After adjusting the IRQ handler in the previous
patches, we can now handle WoL interrupts without introducing any
overhead in the TX/RX hot path. Therefore, the dedicated WoL IRQ
handler is removed.
Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
drivers/net/ethernet/cadence/macb_main.c | 116 ++++++++-----------------------
1 file changed, 29 insertions(+), 87 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index c53b28b42a46489722461957625e8377be63e427..de166dd9e637f5e436274b99f2c5eef99dcdb351 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -71,7 +71,8 @@ struct sifive_fu540_macb_mgmt {
| MACB_BIT(TXUBR))
#define MACB_INT_MISC_FLAGS (MACB_TX_ERR_FLAGS | MACB_BIT(RXUBR) | \
- MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP))
+ MACB_BIT(ISR_ROVR) | MACB_BIT(HRESP) | \
+ GEM_BIT(WOL) | MACB_BIT(WOL))
/* Max length of transmit frame must be a multiple of 8 bytes */
#define MACB_TX_LEN_ALIGN 8
@@ -2027,62 +2028,32 @@ static void macb_hresp_error_task(struct work_struct *work)
netif_tx_start_all_queues(dev);
}
-static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
+static void macb_wol_interrupt(struct macb_queue *queue, u32 status)
{
- struct macb_queue *queue = dev_id;
struct macb *bp = queue->bp;
- u32 status;
- status = queue_readl(queue, ISR);
-
- if (unlikely(!status))
- return IRQ_NONE;
-
- spin_lock(&bp->lock);
-
- if (status & MACB_BIT(WOL)) {
- queue_writel(queue, IDR, MACB_BIT(WOL));
- macb_writel(bp, WOL, 0);
- netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
- (unsigned int)(queue - bp->queues),
- (unsigned long)status);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
- queue_writel(queue, ISR, MACB_BIT(WOL));
- pm_wakeup_event(&bp->pdev->dev, 0);
- }
-
- spin_unlock(&bp->lock);
-
- return IRQ_HANDLED;
+ queue_writel(queue, IDR, MACB_BIT(WOL));
+ macb_writel(bp, WOL, 0);
+ netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
+ (unsigned int)(queue - bp->queues),
+ (unsigned long)status);
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, MACB_BIT(WOL));
+ pm_wakeup_event(&bp->pdev->dev, 0);
}
-static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
+static void gem_wol_interrupt(struct macb_queue *queue, u32 status)
{
- struct macb_queue *queue = dev_id;
struct macb *bp = queue->bp;
- u32 status;
- status = queue_readl(queue, ISR);
-
- if (unlikely(!status))
- return IRQ_NONE;
-
- spin_lock(&bp->lock);
-
- if (status & GEM_BIT(WOL)) {
- queue_writel(queue, IDR, GEM_BIT(WOL));
- gem_writel(bp, WOL, 0);
- netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
- (unsigned int)(queue - bp->queues),
- (unsigned long)status);
- if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
- queue_writel(queue, ISR, GEM_BIT(WOL));
- pm_wakeup_event(&bp->pdev->dev, 0);
- }
-
- spin_unlock(&bp->lock);
-
- return IRQ_HANDLED;
+ queue_writel(queue, IDR, GEM_BIT(WOL));
+ gem_writel(bp, WOL, 0);
+ netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
+ (unsigned int)(queue - bp->queues),
+ (unsigned long)status);
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, GEM_BIT(WOL));
+ pm_wakeup_event(&bp->pdev->dev, 0);
}
static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
@@ -2147,6 +2118,14 @@ static int macb_interrupt_misc(struct macb_queue *queue, u32 status)
queue_writel(queue, ISR, MACB_BIT(HRESP));
}
+ if (macb_is_gem(bp)) {
+ if (status & GEM_BIT(WOL))
+ gem_wol_interrupt(queue, status);
+ } else {
+ if (status & MACB_BIT(WOL))
+ macb_wol_interrupt(queue, status);
+ }
+
return 0;
}
@@ -5943,7 +5922,6 @@ static int __maybe_unused macb_suspend(struct device *dev)
unsigned long flags;
u32 tmp, ifa_local;
unsigned int q;
- int err;
if (!device_may_wakeup(&bp->dev->dev))
phy_exit(bp->phy);
@@ -6007,39 +5985,15 @@ static int __maybe_unused macb_suspend(struct device *dev)
/* write IP address into register */
tmp |= MACB_BFEXT(IP, ifa_local);
}
- spin_unlock_irqrestore(&bp->lock, flags);
- /* Change interrupt handler and
- * Enable WoL IRQ on queue 0
- */
- devm_free_irq(dev, bp->queues[0].irq, bp->queues);
if (macb_is_gem(bp)) {
- err = devm_request_irq(dev, bp->queues[0].irq, gem_wol_interrupt,
- IRQF_SHARED, netdev->name, bp->queues);
- if (err) {
- dev_err(dev,
- "Unable to request IRQ %d (error %d)\n",
- bp->queues[0].irq, err);
- return err;
- }
- spin_lock_irqsave(&bp->lock, flags);
queue_writel(bp->queues, IER, GEM_BIT(WOL));
gem_writel(bp, WOL, tmp);
- spin_unlock_irqrestore(&bp->lock, flags);
} else {
- err = devm_request_irq(dev, bp->queues[0].irq, macb_wol_interrupt,
- IRQF_SHARED, netdev->name, bp->queues);
- if (err) {
- dev_err(dev,
- "Unable to request IRQ %d (error %d)\n",
- bp->queues[0].irq, err);
- return err;
- }
- spin_lock_irqsave(&bp->lock, flags);
queue_writel(bp->queues, IER, MACB_BIT(WOL));
macb_writel(bp, WOL, tmp);
- spin_unlock_irqrestore(&bp->lock, flags);
}
+ spin_unlock_irqrestore(&bp->lock, flags);
enable_irq_wake(bp->queues[0].irq);
}
@@ -6081,7 +6035,6 @@ static int __maybe_unused macb_resume(struct device *dev)
struct macb_queue *queue;
unsigned long flags;
unsigned int q;
- int err;
if (!device_may_wakeup(&bp->dev->dev))
phy_init(bp->phy);
@@ -6108,17 +6061,6 @@ static int __maybe_unused macb_resume(struct device *dev)
queue_writel(bp->queues, ISR, -1);
spin_unlock_irqrestore(&bp->lock, flags);
- /* Replace interrupt handler on queue 0 */
- devm_free_irq(dev, bp->queues[0].irq, bp->queues);
- err = devm_request_irq(dev, bp->queues[0].irq, macb_interrupt,
- IRQF_SHARED, netdev->name, bp->queues);
- if (err) {
- dev_err(dev,
- "Unable to request IRQ %d (error %d)\n",
- bp->queues[0].irq, err);
- return err;
- }
-
disable_irq_wake(bp->queues[0].irq);
/* Now make sure we disable phy before moving
--
2.53.0
* Re: [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
2026-03-28 10:17 ` [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler Kevin Hao
@ 2026-04-01 2:54 ` Jakub Kicinski
2026-04-01 9:30 ` Kevin Hao
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2026-04-01 2:54 UTC (permalink / raw)
To: Kevin Hao
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Sat, 28 Mar 2026 18:17:46 +0800 Kevin Hao wrote:
> Currently, the MACB_CAPS_ISR_CLEAR_ON_WRITE flag is checked in every
> branch of the IRQ handler. This repeated evaluation is unnecessary.
> By consolidating the flag check, we eliminate redundant loads of
> bp->caps when TX and RX events occur simultaneously, a common scenario
> under high network throughput. Additionally, this optimization reduces
> the function size from 0x2e8 to 0x2c4.
feels a bit subjective TBH. An alternative improvement would be to
factor out the conditional to a helper:
static void macb_queue_isr_clear(bp, queue, mask)
{
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, mask);
}
I'd like an ack one way or the other from someone before merging this
* Re: [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function
2026-03-28 10:17 ` [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
@ 2026-04-01 2:54 ` Jakub Kicinski
2026-04-01 9:31 ` Kevin Hao
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2026-04-01 2:54 UTC (permalink / raw)
To: Kevin Hao
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Sat, 28 Mar 2026 18:17:47 +0800 Kevin Hao wrote:
> struct macb_queue *queue = dev_id;
> struct macb *bp = queue->bp;
> struct net_device *dev = bp->dev;
> - u32 status, ctrl;
> + u32 status;
> bool isr_clear;
nit: please try to keep the variable declaration lines sorted longest to
shortest
* Re: [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL
2026-03-28 10:17 ` [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
@ 2026-04-01 2:55 ` Jakub Kicinski
2026-04-01 9:32 ` Kevin Hao
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2026-04-01 2:55 UTC (permalink / raw)
To: Kevin Hao
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Sat, 28 Mar 2026 18:17:48 +0800 Kevin Hao wrote:
> In the current implementation, the suspend/resume path frees the
> existing IRQ handler and sets up a dedicated WoL IRQ handler, then
> restores the original handler upon resume. This approach is not used
> by any other Ethernet driver and unnecessarily complicates the
> suspend/resume process. After adjusting the IRQ handler in the previous
> patches, we can now handle WoL interrupts without introducing any
> overhead in the TX/RX hot path. Therefore, the dedicated WoL IRQ
> handler is removed.
Couple of sentences on testing (platform + flows) would be great here.
* Re: [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
2026-04-01 2:54 ` Jakub Kicinski
@ 2026-04-01 9:30 ` Kevin Hao
2026-04-01 11:49 ` Nicolai Buchwitz
0 siblings, 1 reply; 14+ messages in thread
From: Kevin Hao @ 2026-04-01 9:30 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Tue, Mar 31, 2026 at 07:54:00PM -0700, Jakub Kicinski wrote:
> On Sat, 28 Mar 2026 18:17:46 +0800 Kevin Hao wrote:
> > Currently, the MACB_CAPS_ISR_CLEAR_ON_WRITE flag is checked in every
> > branch of the IRQ handler. This repeated evaluation is unnecessary.
> > By consolidating the flag check, we eliminate redundant loads of
> > bp->caps when TX and RX events occur simultaneously, a common scenario
> > under high network throughput. Additionally, this optimization reduces
> > the function size from 0x2e8 to 0x2c4.
>
> feels a bit subjective TBH. An alternative improvement would be to
> factor out the conditional to a helper:
>
> static void macb_queue_isr_clear(bp, queue, mask)
> {
> if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
> queue_writel(queue, ISR, mask);
> }
In addition to the similar pattern in the macb_interrupt() function, there are
seven other instances of this pattern in the macb driver.
$ git grep "if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)" drivers/net/ethernet/cadence/ | wc -l
7
I agree that using a helper function, as you proposed, would reduce the number
of source code lines and improve the readability of the driver. However, such
changes would not affect the final generated code.
The goal of my changes is to reduce both the footprint of macb_interrupt() and
the number of assembly instructions executed within its event loop.
Before the patch, the condition `if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)`
produced the following assembly:
ffff800080d58b1c: 387b6a68 ldrb w8, [x19, x27]
ffff800080d58b20: 360000c8 tbz w8, #0, ffff800080d58b38 <macb_interrupt+0xd0>
After the patch, the condition `if (isr_clear)` results in:
ffff800080d58a4c: 360000d8 tbz w24, #0, ffff800080d58a64 <macb_interrupt+0xac>
Thus, we eliminate two `ldrb` overheads per iteration of the event loop in
macb_interrupt() when both TX and RX events occur simultaneously. This also
reduces the function's footprint by 36 bytes, as the `ldrb` instructions are
omitted in each event branch.
Therefore, your proposed changes and mine serve different purposes.
I acknowledge that my change represents only a very slight optimization for
the interrupt handler. I am also open to your preference for improving source
code readability. Please let me know your decision.
Thanks,
Kevin
* Re: [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function
2026-04-01 2:54 ` Jakub Kicinski
@ 2026-04-01 9:31 ` Kevin Hao
0 siblings, 0 replies; 14+ messages in thread
From: Kevin Hao @ 2026-04-01 9:31 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Tue, Mar 31, 2026 at 07:54:31PM -0700, Jakub Kicinski wrote:
> On Sat, 28 Mar 2026 18:17:47 +0800 Kevin Hao wrote:
> > struct macb_queue *queue = dev_id;
> > struct macb *bp = queue->bp;
> > struct net_device *dev = bp->dev;
> > - u32 status, ctrl;
> > + u32 status;
> > bool isr_clear;
>
> nit: please try to keep the variable declaration lines sorted longest to
> shortest
Will address this in v2.
Thanks,
Kevin
* Re: [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL
2026-04-01 2:55 ` Jakub Kicinski
@ 2026-04-01 9:32 ` Kevin Hao
0 siblings, 0 replies; 14+ messages in thread
From: Kevin Hao @ 2026-04-01 9:32 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Paolo Abeni, netdev
On Tue, Mar 31, 2026 at 07:55:17PM -0700, Jakub Kicinski wrote:
> On Sat, 28 Mar 2026 18:17:48 +0800 Kevin Hao wrote:
> > In the current implementation, the suspend/resume path frees the
> > existing IRQ handler and sets up a dedicated WoL IRQ handler, then
> > restores the original handler upon resume. This approach is not used
> > by any other Ethernet driver and unnecessarily complicates the
> > suspend/resume process. After adjusting the IRQ handler in the previous
> > patches, we can now handle WoL interrupts without introducing any
> > overhead in the TX/RX hot path. Therefore, the dedicated WoL IRQ
> > handler is removed.
>
> Couple of sentences on testing (platform + flows) would be great here.
I have verified WoL functionality on my AMD ZynqMP board using the following
steps. I will include this information in the v2 commit message.
root@amd-zynqmp:~# ifconfig end0 192.168.3.3
root@amd-zynqmp:~# ethtool -s end0 wol a
root@amd-zynqmp:~# echo mem >/sys/power/state
PM: suspend entry (deep)
Filesystems sync: 0.055 seconds
Freezing user space processes
Freezing user space processes completed (elapsed 0.006 seconds)
OOM killer disabled.
Freezing remaining freezable tasks
Freezing remaining freezable tasks completed (elapsed 0.004 seconds)
printk: Suspending console(s) (use no_console_suspend to debug)
macb ff0e0000.ethernet: gem-ptp-timer ptp clock unregistered.
e1000e: EEE TX LPI TIMER: 00000000
xuartps ff000000.serial: ttyPS0: Unable to drain transmitter
Disabling non-boot CPUs ...
psci: CPU3 killed (polled 0 ms)
psci: CPU2 killed (polled 0 ms)
psci: CPU1 killed (polled 0 ms)
Enabling non-boot CPUs ...
Detected VIPT I-cache on CPU1
CPU1: Booted secondary processor 0x0000000001 [0x410fd034]
CPU1 is up
Detected VIPT I-cache on CPU2
CPU2: Booted secondary processor 0x0000000002 [0x410fd034]
CPU2 is up
Detected VIPT I-cache on CPU3
CPU3: Booted secondary processor 0x0000000003 [0x410fd034]
CPU3 is up
macb ff0e0000.ethernet end0: Link is Down
macb ff0e0000.ethernet end0: configuring for phy/rgmii-id link mode
macb ff0e0000.ethernet end0: Link is Up - 1Gbps/Full - flow control tx
ptp ptp0: PM: parent end0 should not be sleeping
macb ff0e0000.ethernet: gem-ptp-timer ptp clock registered.
phy phy-fd400000.phy.2: phy_power_on was called before phy_init
ata1: SATA link down (SStatus 0 SControl 330)
ata2: SATA link down (SStatus 0 SControl 330)
e1000e 0000:01:00.0 enp1s0: Hardware Error
OOM killer enabled.
Restarting tasks: Starting
Restarting tasks: Done
random: crng reseeded on system resumption
PM: suspend exit
root@amd-zynqmp:~# e1000e 0000:01:00.0 enp1s0: Hardware Error
root@amd-zynqmp:~# e1000e 0000:01:00.0 enp1s0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
root@amd-zynqmp:~#
Thanks,
Kevin
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
2026-04-01 9:30 ` Kevin Hao
@ 2026-04-01 11:49 ` Nicolai Buchwitz
2026-04-02 13:44 ` Kevin Hao
0 siblings, 1 reply; 14+ messages in thread
From: Nicolai Buchwitz @ 2026-04-01 11:49 UTC (permalink / raw)
To: Kevin Hao
Cc: Jakub Kicinski, Nicolas Ferre, Claudiu Beznea, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, netdev
On 1.4.2026 11:30, Kevin Hao wrote:
> On Tue, Mar 31, 2026 at 07:54:00PM -0700, Jakub Kicinski wrote:
>> On Sat, 28 Mar 2026 18:17:46 +0800 Kevin Hao wrote:
>> > Currently, the MACB_CAPS_ISR_CLEAR_ON_WRITE flag is checked in every
>> > branch of the IRQ handler. This repeated evaluation is unnecessary.
>> > By consolidating the flag check, we eliminate redundant loads of
>> > bp->caps when TX and RX events occur simultaneously, a common scenario
>> > under high network throughput. Additionally, this optimization reduces
>> > the function size from 0x2e8 to 0x2c4.
>>
>> feels a bit subjective TBH. An alternative improvement would be to
>> factor out the conditional to a helper:
>>
>> static void macb_queue_isr_clear(struct macb *bp,
>> 				 struct macb_queue *queue, u32 mask)
>> {
>> 	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
>> 		queue_writel(queue, ISR, mask);
>> }
>
> In addition to the similar pattern in the macb_interrupt() function,
> there are seven other instances of this pattern in the macb driver.
>
> $ git grep "if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)" drivers/net/ethernet/cadence/ | wc -l
> 7
>
> I agree that using a helper function, as you proposed, would reduce
> the number of source code lines and improve the readability of the
> driver. However, such changes would not affect the final generated
> code.
>
> The goal of my changes is to reduce both the footprint of
> macb_interrupt() and the number of assembly instructions executed
> within its event loop.
>
> Before the patch, the condition
> `if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)` produced the
> following assembly:
>
>   ffff800080d58b1c: 387b6a68 	ldrb	w8, [x19, x27]
>   ffff800080d58b20: 360000c8 	tbz	w8, #0, ffff800080d58b38 <macb_interrupt+0xd0>
>
> After the patch, the condition `if (isr_clear)` results in:
>
>   ffff800080d58a4c: 360000d8 	tbz	w24, #0, ffff800080d58a64 <macb_interrupt+0xac>
>
> Thus, we eliminate two `ldrb` overheads per iteration of the event
> loop in macb_interrupt() when both TX and RX events occur
> simultaneously. This also reduces the function's footprint by 36
> bytes, as the `ldrb` instructions are omitted in each event branch.
>
> Therefore, your proposed changes and mine serve different purposes.
>
> I acknowledge that my change represents only a very slight
> optimization for the interrupt handler. I am also open to your
> preference for improving source code readability. Please let me know
> your decision.
I'm always a fan of optimizations, but I guess in this case the
saved ldrb is negligible next to the MMIO in the same path. We're
talking a single L1 cache hit (~1ns) vs an uncacheable register
write (~100ns+). Patch 3 also re-reads bp->caps in
macb_interrupt_misc() anyway, undoing the local bool for the misc
path.
I'd ack Jakub's helper approach. A macb_queue_isr_clear() would
be consistent across all callsites, including the 7 other
instances you counted outside macb_interrupt().
> Thanks,
> Kevin
Nicolai
* Re: [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler
2026-04-01 11:49 ` Nicolai Buchwitz
@ 2026-04-02 13:44 ` Kevin Hao
0 siblings, 0 replies; 14+ messages in thread
From: Kevin Hao @ 2026-04-02 13:44 UTC (permalink / raw)
To: Nicolai Buchwitz
Cc: Jakub Kicinski, Nicolas Ferre, Claudiu Beznea, Andrew Lunn,
David S. Miller, Eric Dumazet, Paolo Abeni, netdev
On Wed, Apr 01, 2026 at 01:49:28PM +0200, Nicolai Buchwitz wrote:
>
> I'm always a fan of optimizations, but I guess in this case the
> saved ldrb is negligible next to the MMIO in the same path. We're
> talking a single L1 cache hit (~1ns) vs an uncacheable register
> write (~100ns+). Patch 3 also re-reads bp->caps in
> macb_interrupt_misc() anyway, undoing the local bool for the misc
> path.
>
> I'd ack Jakub's helper approach. A macb_queue_isr_clear() would
> be consistent across all callsites, including the 7 other
> instances you counted outside macb_interrupt().
Fair enough. I have used macb_queue_isr_clear() in v2.
Thanks,
Kevin
* Re: [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
` (3 preceding siblings ...)
2026-03-28 10:17 ` [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
@ 2026-04-03 16:17 ` Simon Horman
4 siblings, 0 replies; 14+ messages in thread
From: Simon Horman @ 2026-04-03 16:17 UTC (permalink / raw)
To: Kevin Hao
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev
On Sat, Mar 28, 2026 at 06:17:44PM +0800, Kevin Hao wrote:
> During debugging of a suspend/resume issue, I observed that the macb driver
> employs a dedicated IRQ handler for Wake-on-LAN (WoL) support. To my knowledge,
> no other Ethernet driver adopts this approach. This implementation unnecessarily
> complicates the suspend/resume process without providing any clear benefit.
> Instead, we can easily modify the existing IRQ handler to manage WoL events,
> avoiding any overhead in the TX/RX hot path.
>
> I am skeptical that the minor optimizations to the IRQ handler proposed in this
> patch series would yield any measurable performance improvement. However, it
> does appear that the execution time of the macb_interrupt() function is
> slightly reduced.
>
> The following data (net throughput and execution time of macb_interrupt)
> were collected from my AMD ZynqMP board using the commands:
> taskset -c 1,2,3 iperf3 -c 192.168.3.4 -t 60 -Z -P 3 -R
> cat /sys/kernel/debug/tracing/trace_stat/function0
...
For the series:
Reviewed-by: Simon Horman <horms@kernel.org>
end of thread, other threads:[~2026-04-03 16:17 UTC | newest]
Thread overview: 14+ messages
2026-03-28 10:17 [PATCH net-next 0/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 1/4] net: macb: Replace open-coded implementation with napi_schedule() Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 2/4] net: macb: Consolidate MACB_CAPS_ISR_CLEAR_ON_WRITE checks in IRQ handler Kevin Hao
2026-04-01 2:54 ` Jakub Kicinski
2026-04-01 9:30 ` Kevin Hao
2026-04-01 11:49 ` Nicolai Buchwitz
2026-04-02 13:44 ` Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 3/4] net: macb: Factor out the handling of non-hot IRQ events into a separate function Kevin Hao
2026-04-01 2:54 ` Jakub Kicinski
2026-04-01 9:31 ` Kevin Hao
2026-03-28 10:17 ` [PATCH net-next 4/4] net: macb: Remove dedicated IRQ handler for WoL Kevin Hao
2026-04-01 2:55 ` Jakub Kicinski
2026-04-01 9:32 ` Kevin Hao
2026-04-03 16:17 ` [PATCH net-next 0/4] " Simon Horman