* [PATCH net v5 0/5] net: macb: various fixes
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun, Krzysztof Kozlowski, Sean Anderson
Fix a few disparate issues in MACB:
[PATCH net v5 0/5] net: macb: various fixes
[PATCH net v5 1/5] dt-bindings: net: cdns,macb: allow tsu_clk without tx_clk
[PATCH net v5 2/5] net: macb: remove illusion about TBQPH/RBQPH being per-queue
[PATCH net v5 3/5] net: macb: move ring size computation to functions
[PATCH net v5 4/5] net: macb: single dma_alloc_coherent() for DMA descriptors
[PATCH net v5 5/5] net: macb: avoid dealing with endianness in macb_set_hwaddr()
Patch 3/5 is a rework that simplifies patch 4/5. It is the only non-fix.
Pending series on MACB are: (1) many cleanup patches and (2) patches for
EyeQ5 support. Those will be sent targeting net-next/main once this
series lands there, aiming to minimise merge conflicts. Old versions of
those patches are visible in the V2 revision [0].
Thanks,
Have a nice day,
Théo
[0]: https://lore.kernel.org/lkml/20250627-macb-v2-0-ff8207d0bb77@bootlin.com/
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
Changes in v5:
- Fix hwaddr endianness patch following comment by Russell [2].
[2]: https://lore.kernel.org/lkml/DCKQTNSCJD5Q.BKVVU59U0MU@bootlin.com/
- Take 4 Acked-by: Nicolas Ferre.
- Take Tested-by: Nicolas Ferre.
- Link to v4: https://lore.kernel.org/r/20250820-macb-fixes-v4-0-23c399429164@bootlin.com
Changes in v4:
- Drop 11 patches that are only cleanups. That includes the
RBOF/skb_reserve() patch that, after discussion with Sean [1], has
had its Fixes trailer dropped. "move ring size computation to
functions" is the only non-fix patch that is kept, as it is depended
upon by further patches. Dropped patches:
dt-bindings: net: cdns,macb: sort compatibles
net: macb: match skb_reserve(skb, NET_IP_ALIGN) with HW alignment
net: macb: use BIT() macro for capability definitions
net: macb: remove gap in MACB_CAPS_* flags
net: macb: Remove local variables clk_init and init in macb_probe()
net: macb: drop macb_config NULL checking
net: macb: simplify macb_dma_desc_get_size()
net: macb: simplify macb_adj_dma_desc_idx()
net: macb: move bp->hw_dma_cap flags to bp->caps
net: macb: introduce DMA descriptor helpers (is 64bit? is PTP?)
net: macb: sort #includes
[1]: https://lore.kernel.org/lkml/d4bead1c-697a-46d8-ba9c-64292fccb19f@linux.dev/
- Wrap code to 80 chars.
- Link to v3: https://lore.kernel.org/r/20250808-macb-fixes-v3-0-08f1fcb5179f@bootlin.com
Changes in v3:
- Cover letter: drop addresses that reject emails:
cyrille.pitchen@atmel.com
hskinnemoen@atmel.com
jeff@garzik.org
rafalo@cadence.com
- dt-bindings: Take 2x Reviewed-by Krzysztof.
- dt-bindings: add Fixes trailer to "allow tsu_clk without tx_clk"
patch, to highlight we are not introducing new behavior.
- Reorder commits; move fixes first followed by cleanup patches.
- Drop all EyeQ5 related commits.
- New commit: "remove gap in MACB_CAPS_* flags".
- New commit: "move ring size computation to functions".
- New commit: "move bp->hw_dma_cap flags to bp->caps".
- Rename introduced helpers macb_dma_is_64b() to macb_dma64() and,
macb_dma_is_ptp() to macb_dma_ptp().
- Rename MACB_CAPS_RSC_CAPABLE -> MACB_CAPS_RSC.
- Fix commit message typos: "maxime" -> "maximise", etc.
- Take 7x Reviewed-by: Sean Anderson.
- Add details to some commit messages.
- Link to v2: https://lore.kernel.org/r/20250627-macb-v2-0-ff8207d0bb77@bootlin.com
---
Théo Lebrun (5):
dt-bindings: net: cdns,macb: allow tsu_clk without tx_clk
net: macb: remove illusion about TBQPH/RBQPH being per-queue
net: macb: move ring size computation to functions
net: macb: single dma_alloc_coherent() for DMA descriptors
net: macb: avoid dealing with endianness in macb_set_hwaddr()
.../devicetree/bindings/net/cdns,macb.yaml | 2 +-
drivers/net/ethernet/cadence/macb.h | 4 -
drivers/net/ethernet/cadence/macb_main.c | 140 ++++++++++-----------
3 files changed, 69 insertions(+), 77 deletions(-)
---
base-commit: 03605e0fae3948824b613bfb31bcf420b89c89c7
change-id: 20250808-macb-fixes-e2f570e11241
Best regards,
--
Théo Lebrun <theo.lebrun@bootlin.com>
* [PATCH net v5 1/5] dt-bindings: net: cdns,macb: allow tsu_clk without tx_clk
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun, Krzysztof Kozlowski
Allow providing a tsu_clk without a tx_clk, as both are optional.
This relaxes an unneeded constraint: it so happened that, in the past,
hardware that needed a tsu_clk always also needed a tx_clk.
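As an illustration, a hypothetical node that the relaxed binding now
accepts (addresses and clock phandles are made up; previously the
third clock-names slot had to be tx_clk):

  ethernet@e000b000 {
          compatible = "cdns,macb";
          reg = <0xe000b000 0x1000>;
          clocks = <&pclk>, <&hclk>, <&tsu_clk>;
          clock-names = "pclk", "hclk", "tsu_clk";
  };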
Fixes: 4e5b6de1f46d ("dt-bindings: net: cdns,macb: Convert to json-schema")
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
Documentation/devicetree/bindings/net/cdns,macb.yaml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/devicetree/bindings/net/cdns,macb.yaml b/Documentation/devicetree/bindings/net/cdns,macb.yaml
index 559d0f733e7e7ac2909b87ab759be51d59be51c2..6e20d67e7628cd9dcef6e430b2a49eeedd0991a7 100644
--- a/Documentation/devicetree/bindings/net/cdns,macb.yaml
+++ b/Documentation/devicetree/bindings/net/cdns,macb.yaml
@@ -85,7 +85,7 @@ properties:
items:
- enum: [ ether_clk, hclk, pclk ]
- enum: [ hclk, pclk ]
- - const: tx_clk
+ - enum: [ tx_clk, tsu_clk ]
- enum: [ rx_clk, tsu_clk ]
- const: tsu_clk
--
2.51.0
* [PATCH net v5 2/5] net: macb: remove illusion about TBQPH/RBQPH being per-queue
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun, Sean Anderson
The MACB driver acts as if TBQPH/RBQPH were configurable on a per-queue
basis; this is a lie. A single register configures the upper 32 bits of
the DMA descriptor buffers for all queues (see the sketch after the
action list below).
Concrete actions:
- Drop GEM_TBQPH/GEM_RBQPH macros which have a queue index argument.
Only use MACB_TBQPH/MACB_RBQPH constants.
- Drop struct macb_queue->TBQPH/RBQPH fields.
- In macb_init_buffers(): do a single write to TBQPH and RBQPH for all
queues instead of a write per queue.
- In macb_tx_error_task(): drop the write to TBQPH.
- In macb_alloc_consistent(): if allocations give different upper
  32 bits, fail. Previously, this would have led to silent memory
  corruption, as queues would have used the upper 32 bits of the
  queue 0 allocation combined with their own low 32 bits.
- In macb_suspend(): if we use the tie off descriptor for suspend, do
the write once for all queues instead of once per queue.
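A minimal sketch of how the controller resolves a queue's Tx
descriptor base, assuming the usual high/low register split
(hypothetical helper for illustration only; the Rx side is
analogous):

  static u64 gem_tx_ring_base(struct macb *bp, unsigned int q)
  {
          u64 hi = macb_readl(bp, TBQPH);             /* shared by all queues */
          u64 lo = queue_readl(&bp->queues[q], TBQP); /* banked per queue */

          return (hi << 32) | lo;
  }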
Fixes: fff8019a08b6 ("net: macb: Add 64 bit addressing support for GEM")
Fixes: ae1f2a56d273 ("net: macb: Added support for many RX queues")
Reviewed-by: Sean Anderson <sean.anderson@linux.dev>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb.h | 4 ---
drivers/net/ethernet/cadence/macb_main.c | 57 ++++++++++++++------------------
2 files changed, 24 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index c9a5c8beb2fa8166195d1d83f187d2d0c62668a8..a7e845fee4b3a2e3d14abb49abdbaf3e8e6ea02b 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -213,10 +213,8 @@
#define GEM_ISR(hw_q) (0x0400 + ((hw_q) << 2))
#define GEM_TBQP(hw_q) (0x0440 + ((hw_q) << 2))
-#define GEM_TBQPH(hw_q) (0x04C8)
#define GEM_RBQP(hw_q) (0x0480 + ((hw_q) << 2))
#define GEM_RBQS(hw_q) (0x04A0 + ((hw_q) << 2))
-#define GEM_RBQPH(hw_q) (0x04D4)
#define GEM_IER(hw_q) (0x0600 + ((hw_q) << 2))
#define GEM_IDR(hw_q) (0x0620 + ((hw_q) << 2))
#define GEM_IMR(hw_q) (0x0640 + ((hw_q) << 2))
@@ -1214,10 +1212,8 @@ struct macb_queue {
unsigned int IDR;
unsigned int IMR;
unsigned int TBQP;
- unsigned int TBQPH;
unsigned int RBQS;
unsigned int RBQP;
- unsigned int RBQPH;
/* Lock to protect tx_head and tx_tail */
spinlock_t tx_ptr_lock;
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index c769b7dbd3baf5cafe64008e18dff939623528d4..3e634049dadf14d371eac68448f80b111f228dfd 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -495,19 +495,19 @@ static void macb_init_buffers(struct macb *bp)
struct macb_queue *queue;
unsigned int q;
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ /* Single register for all queues' high 32 bits. */
+ if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
+ macb_writel(bp, RBQPH,
+ upper_32_bits(bp->queues[0].rx_ring_dma));
+ macb_writel(bp, TBQPH,
+ upper_32_bits(bp->queues[0].tx_ring_dma));
+ }
+#endif
+
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
queue_writel(queue, RBQP, lower_32_bits(queue->rx_ring_dma));
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- if (bp->hw_dma_cap & HW_DMA_CAP_64B)
- queue_writel(queue, RBQPH,
- upper_32_bits(queue->rx_ring_dma));
-#endif
queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- if (bp->hw_dma_cap & HW_DMA_CAP_64B)
- queue_writel(queue, TBQPH,
- upper_32_bits(queue->tx_ring_dma));
-#endif
}
}
@@ -1166,10 +1166,6 @@ static void macb_tx_error_task(struct work_struct *work)
/* Reinitialize the TX desc queue */
queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- if (bp->hw_dma_cap & HW_DMA_CAP_64B)
- queue_writel(queue, TBQPH, upper_32_bits(queue->tx_ring_dma));
-#endif
/* Make TX ring reflect state of hardware */
queue->tx_head = 0;
queue->tx_tail = 0;
@@ -2546,6 +2542,7 @@ static int macb_alloc_consistent(struct macb *bp)
{
struct macb_queue *queue;
unsigned int q;
+ u32 upper;
int size;
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -2553,7 +2550,9 @@ static int macb_alloc_consistent(struct macb *bp)
queue->tx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
&queue->tx_ring_dma,
GFP_KERNEL);
- if (!queue->tx_ring)
+ upper = upper_32_bits(queue->tx_ring_dma);
+ if (!queue->tx_ring ||
+ upper != upper_32_bits(bp->queues[0].tx_ring_dma))
goto out_err;
netdev_dbg(bp->dev,
"Allocated TX ring for queue %u of %d bytes at %08lx (mapped %p)\n",
@@ -2567,8 +2566,11 @@ static int macb_alloc_consistent(struct macb *bp)
size = RX_RING_BYTES(bp) + bp->rx_bd_rd_prefetch;
queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
- &queue->rx_ring_dma, GFP_KERNEL);
- if (!queue->rx_ring)
+ &queue->rx_ring_dma,
+ GFP_KERNEL);
+ upper = upper_32_bits(queue->rx_ring_dma);
+ if (!queue->rx_ring ||
+ upper != upper_32_bits(bp->queues[0].rx_ring_dma))
goto out_err;
netdev_dbg(bp->dev,
"Allocated RX ring of %d bytes at %08lx (mapped %p)\n",
@@ -4309,12 +4311,6 @@ static int macb_init(struct platform_device *pdev)
queue->TBQP = GEM_TBQP(hw_q - 1);
queue->RBQP = GEM_RBQP(hw_q - 1);
queue->RBQS = GEM_RBQS(hw_q - 1);
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
- queue->TBQPH = GEM_TBQPH(hw_q - 1);
- queue->RBQPH = GEM_RBQPH(hw_q - 1);
- }
-#endif
} else {
/* queue0 uses legacy registers */
queue->ISR = MACB_ISR;
@@ -4323,12 +4319,6 @@ static int macb_init(struct platform_device *pdev)
queue->IMR = MACB_IMR;
queue->TBQP = MACB_TBQP;
queue->RBQP = MACB_RBQP;
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
- queue->TBQPH = MACB_TBQPH;
- queue->RBQPH = MACB_RBQPH;
- }
-#endif
}
/* get irq: here we use the linux queue index, not the hardware
@@ -5452,6 +5442,11 @@ static int __maybe_unused macb_suspend(struct device *dev)
*/
tmp = macb_readl(bp, NCR);
macb_writel(bp, NCR, tmp & ~(MACB_BIT(TE) | MACB_BIT(RE)));
+#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ if (!(bp->caps & MACB_CAPS_QUEUE_DISABLE))
+ macb_writel(bp, RBQPH,
+ upper_32_bits(bp->rx_ring_tieoff_dma));
+#endif
for (q = 0, queue = bp->queues; q < bp->num_queues;
++q, ++queue) {
/* Disable RX queues */
@@ -5461,10 +5456,6 @@ static int __maybe_unused macb_suspend(struct device *dev)
/* Tie off RX queues */
queue_writel(queue, RBQP,
lower_32_bits(bp->rx_ring_tieoff_dma));
-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
- queue_writel(queue, RBQPH,
- upper_32_bits(bp->rx_ring_tieoff_dma));
-#endif
}
/* Disable all interrupts */
queue_writel(queue, IDR, -1);
--
2.51.0
* [PATCH net v5 3/5] net: macb: move ring size computation to functions
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun
The tx/rx ring size calculation is somewhat complex and partially hidden
behind a macro. Move that out of the {RX,TX}_RING_BYTES() macros and
macb_{alloc,free}_consistent() functions into neat separate functions.
In macb_free_consistent(), we drop the size variable and directly call
the size helpers in the arguments list. In macb_alloc_consistent(), we
keep the size variable that is used by netdev_dbg() calls.
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 3e634049dadf14d371eac68448f80b111f228dfd..73840808ea801b35a64a296dedc3a91e6e1f9f51 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -51,14 +51,10 @@ struct sifive_fu540_macb_mgmt {
#define DEFAULT_RX_RING_SIZE 512 /* must be power of 2 */
#define MIN_RX_RING_SIZE 64
#define MAX_RX_RING_SIZE 8192
-#define RX_RING_BYTES(bp) (macb_dma_desc_get_size(bp) \
- * (bp)->rx_ring_size)
#define DEFAULT_TX_RING_SIZE 512 /* must be power of 2 */
#define MIN_TX_RING_SIZE 64
#define MAX_TX_RING_SIZE 4096
-#define TX_RING_BYTES(bp) (macb_dma_desc_get_size(bp) \
- * (bp)->tx_ring_size)
/* level of occupied TX descriptors under which we wake up TX process */
#define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->tx_ring_size / 4)
@@ -2470,11 +2466,20 @@ static void macb_free_rx_buffers(struct macb *bp)
}
}
+static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
+{
+ return macb_dma_desc_get_size(bp) * bp->tx_ring_size + bp->tx_bd_rd_prefetch;
+}
+
+static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
+{
+ return macb_dma_desc_get_size(bp) * bp->rx_ring_size + bp->rx_bd_rd_prefetch;
+}
+
static void macb_free_consistent(struct macb *bp)
{
struct macb_queue *queue;
unsigned int q;
- int size;
if (bp->rx_ring_tieoff) {
dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
@@ -2488,14 +2493,14 @@ static void macb_free_consistent(struct macb *bp)
kfree(queue->tx_skb);
queue->tx_skb = NULL;
if (queue->tx_ring) {
- size = TX_RING_BYTES(bp) + bp->tx_bd_rd_prefetch;
- dma_free_coherent(&bp->pdev->dev, size,
+ dma_free_coherent(&bp->pdev->dev,
+ macb_tx_ring_size_per_queue(bp),
queue->tx_ring, queue->tx_ring_dma);
queue->tx_ring = NULL;
}
if (queue->rx_ring) {
- size = RX_RING_BYTES(bp) + bp->rx_bd_rd_prefetch;
- dma_free_coherent(&bp->pdev->dev, size,
+ dma_free_coherent(&bp->pdev->dev,
+ macb_rx_ring_size_per_queue(bp),
queue->rx_ring, queue->rx_ring_dma);
queue->rx_ring = NULL;
}
@@ -2546,7 +2551,7 @@ static int macb_alloc_consistent(struct macb *bp)
int size;
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- size = TX_RING_BYTES(bp) + bp->tx_bd_rd_prefetch;
+ size = macb_tx_ring_size_per_queue(bp);
queue->tx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
&queue->tx_ring_dma,
GFP_KERNEL);
@@ -2564,7 +2569,7 @@ static int macb_alloc_consistent(struct macb *bp)
if (!queue->tx_skb)
goto out_err;
- size = RX_RING_BYTES(bp) + bp->rx_bd_rd_prefetch;
+ size = macb_rx_ring_size_per_queue(bp);
queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
&queue->rx_ring_dma,
GFP_KERNEL);
--
2.51.0
* [PATCH net v5 4/5] net: macb: single dma_alloc_coherent() for DMA descriptors
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun, Sean Anderson
Move from 2*NUM_QUEUES dma_alloc_coherent() calls for DMA descriptor
rings to 2 calls overall.
The issue is that all queues share the same register for configuring
the upper 32 bits of the Tx/Rx descriptor rings. Taking Tx, notice how
TBQPH does *not* depend on the queue index:
#define GEM_TBQP(hw_q) (0x0440 + ((hw_q) << 2))
#define GEM_TBQPH(hw_q) (0x04C8)
queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
if (bp->hw_dma_cap & HW_DMA_CAP_64B)
queue_writel(queue, TBQPH, upper_32_bits(queue->tx_ring_dma));
#endif
To maximise our chances of getting valid DMA addresses, we do a single
dma_alloc_coherent() across queues. This improves the odds because
alloc_pages() guarantees natural alignment. Other codepaths (IOMMU or
dev/arch dma_map_ops) don't give strong enough guarantees: even page
alignment isn't enough, as a page-aligned block can still straddle a
4 GiB boundary.
Two considerations:
- dma_alloc_coherent() gives us page alignment. Here we remove this
  constraint, meaning each queue's ring won't be page-aligned anymore.
- This can save tiny amounts of memory. Fewer allocations means
  (1) less overhead (constant cost per alloc) and (2) fewer wasted
  bytes due to alignment constraints.
Example for (2): 4 queues, default ring size (512), 64-bit DMA
descriptors, 16K pages:
- Before: 8 allocs of 8K, each rounded to 16K => 64K wasted.
- After: 2 allocs of 32K => 0K wasted.
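Said differently, here is a minimal sketch of the invariant the new
check enforces (hypothetical helper, not part of the patch):

  /* True iff the block [dma, dma + size) fits within a single 4 GiB
   * window, i.e. all descriptors share the same upper 32 bits and one
   * TBQPH/RBQPH value is valid for every queue.
   */
  static bool macb_rings_share_upper32(dma_addr_t dma, size_t size)
  {
          return upper_32_bits(dma) == upper_32_bits(dma + size - 1);
  }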
Fixes: 02c958dd3446 ("net/macb: add TX multiqueue support for gem")
Reviewed-by: Sean Anderson <sean.anderson@linux.dev>
Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Tested-by: Nicolas Ferre <nicolas.ferre@microchip.com> # on sam9x75
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 80 ++++++++++++++++----------------
1 file changed, 41 insertions(+), 39 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 73840808ea801b35a64a296dedc3a91e6e1f9f51..fc082a7a5a313be3d58a008533c3815cb1b1639a 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2478,32 +2478,30 @@ static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
static void macb_free_consistent(struct macb *bp)
{
+ struct device *dev = &bp->pdev->dev;
struct macb_queue *queue;
unsigned int q;
+ size_t size;
if (bp->rx_ring_tieoff) {
- dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
+ dma_free_coherent(dev, macb_dma_desc_get_size(bp),
bp->rx_ring_tieoff, bp->rx_ring_tieoff_dma);
bp->rx_ring_tieoff = NULL;
}
bp->macbgem_ops.mog_free_rx_buffers(bp);
+ size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+ dma_free_coherent(dev, size, bp->queues[0].tx_ring, bp->queues[0].tx_ring_dma);
+
+ size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+ dma_free_coherent(dev, size, bp->queues[0].rx_ring, bp->queues[0].rx_ring_dma);
+
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
kfree(queue->tx_skb);
queue->tx_skb = NULL;
- if (queue->tx_ring) {
- dma_free_coherent(&bp->pdev->dev,
- macb_tx_ring_size_per_queue(bp),
- queue->tx_ring, queue->tx_ring_dma);
- queue->tx_ring = NULL;
- }
- if (queue->rx_ring) {
- dma_free_coherent(&bp->pdev->dev,
- macb_rx_ring_size_per_queue(bp),
- queue->rx_ring, queue->rx_ring_dma);
- queue->rx_ring = NULL;
- }
+ queue->tx_ring = NULL;
+ queue->rx_ring = NULL;
}
}
@@ -2545,41 +2543,45 @@ static int macb_alloc_rx_buffers(struct macb *bp)
static int macb_alloc_consistent(struct macb *bp)
{
+ struct device *dev = &bp->pdev->dev;
+ dma_addr_t tx_dma, rx_dma;
struct macb_queue *queue;
unsigned int q;
- u32 upper;
- int size;
+ void *tx, *rx;
+ size_t size;
+
+ /*
+ * Upper 32 bits of the Tx/Rx DMA descriptors for each queue must match!
+ * We cannot enforce this guarantee; the best we can do is do a single
+ * allocation and hope it will land in alloc_pages(), which guarantees
+ * natural alignment of physical addresses.
+ */
+
+ size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+ tx = dma_alloc_coherent(dev, size, &tx_dma, GFP_KERNEL);
+ if (!tx || upper_32_bits(tx_dma) != upper_32_bits(tx_dma + size - 1))
+ goto out_err;
+ netdev_dbg(bp->dev, "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
+ size, bp->num_queues, (unsigned long)tx_dma, tx);
+
+ size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+ rx = dma_alloc_coherent(dev, size, &rx_dma, GFP_KERNEL);
+ if (!rx || upper_32_bits(rx_dma) != upper_32_bits(rx_dma + size - 1))
+ goto out_err;
+ netdev_dbg(bp->dev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
+ size, bp->num_queues, (unsigned long)rx_dma, rx);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- size = macb_tx_ring_size_per_queue(bp);
- queue->tx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
- &queue->tx_ring_dma,
- GFP_KERNEL);
- upper = upper_32_bits(queue->tx_ring_dma);
- if (!queue->tx_ring ||
- upper != upper_32_bits(bp->queues[0].tx_ring_dma))
- goto out_err;
- netdev_dbg(bp->dev,
- "Allocated TX ring for queue %u of %d bytes at %08lx (mapped %p)\n",
- q, size, (unsigned long)queue->tx_ring_dma,
- queue->tx_ring);
+ queue->tx_ring = tx + macb_tx_ring_size_per_queue(bp) * q;
+ queue->tx_ring_dma = tx_dma + macb_tx_ring_size_per_queue(bp) * q;
+
+ queue->rx_ring = rx + macb_rx_ring_size_per_queue(bp) * q;
+ queue->rx_ring_dma = rx_dma + macb_rx_ring_size_per_queue(bp) * q;
size = bp->tx_ring_size * sizeof(struct macb_tx_skb);
queue->tx_skb = kmalloc(size, GFP_KERNEL);
if (!queue->tx_skb)
goto out_err;
-
- size = macb_rx_ring_size_per_queue(bp);
- queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev, size,
- &queue->rx_ring_dma,
- GFP_KERNEL);
- upper = upper_32_bits(queue->rx_ring_dma);
- if (!queue->rx_ring ||
- upper != upper_32_bits(bp->queues[0].rx_ring_dma))
- goto out_err;
- netdev_dbg(bp->dev,
- "Allocated RX ring of %d bytes at %08lx (mapped %p)\n",
- size, (unsigned long)queue->rx_ring_dma, queue->rx_ring);
}
if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
goto out_err;
--
2.51.0
* [PATCH net v5 5/5] net: macb: avoid dealing with endianness in macb_set_hwaddr()
From: Théo Lebrun @ 2025-09-10 16:15 UTC
To: Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Théo Lebrun, Sean Anderson
bp->dev->dev_addr is of type `unsigned char *`. Casting it to a u32
pointer and dereferencing it implies dealing with endianness manually,
which is error-prone.
Replace that with calls to the get_unaligned_le32()/get_unaligned_le16()
helpers.
This was found using sparse:
⟩ make C=2 drivers/net/ethernet/cadence/macb_main.o
warning: incorrect type in assignment (different base types)
expected unsigned int [usertype] bottom
got restricted __le32 [usertype]
warning: incorrect type in assignment (different base types)
expected unsigned short [usertype] top
got restricted __le16 [usertype]
...
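For example, with a hypothetical address 00:11:22:33:44:55, the
helpers read the bytes little-endian regardless of CPU endianness:

  const u8 addr[ETH_ALEN] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
  u32 bottom = get_unaligned_le32(addr);    /* 0x33221100 */
  u16 top = get_unaligned_le16(addr + 4);   /* 0x5544 */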
Reviewed-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index fc082a7a5a313be3d58a008533c3815cb1b1639a..c16d60048185b4cb473ddfcf4633fa2f6dea20cc 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -271,12 +271,10 @@ static bool hw_is_gem(void __iomem *addr, bool native_io)
static void macb_set_hwaddr(struct macb *bp)
{
- u32 bottom;
- u16 top;
+ u32 bottom = get_unaligned_le32(bp->dev->dev_addr);
+ u16 top = get_unaligned_le16(bp->dev->dev_addr + 4);
- bottom = cpu_to_le32(*((u32 *)bp->dev->dev_addr));
macb_or_gem_writel(bp, SA1B, bottom);
- top = cpu_to_le16(*((u16 *)(bp->dev->dev_addr + 4)));
macb_or_gem_writel(bp, SA1T, top);
if (gem_has_ptp(bp)) {
--
2.51.0
* Re: [PATCH net v5 5/5] net: macb: avoid dealing with endianness in macb_set_hwaddr()
From: Karumanchi, Vineeth @ 2025-09-11 3:13 UTC
To: Théo Lebrun, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven,
Harini Katakam, Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Sean Anderson
Hi Theo,
On 9/10/2025 9:45 PM, Théo Lebrun wrote:
> bp->dev->dev_addr is of type `unsigned char *`. Casting it to a u32
> pointer and dereferencing it implies dealing with endianness manually,
> which is error-prone.
>
> Replace that with calls to the get_unaligned_le32()/get_unaligned_le16()
> helpers.
>
> This was found using sparse:
> ⟩ make C=2 drivers/net/ethernet/cadence/macb_main.o
> warning: incorrect type in assignment (different base types)
> expected unsigned int [usertype] bottom
> got restricted __le32 [usertype]
> warning: incorrect type in assignment (different base types)
> expected unsigned short [usertype] top
> got restricted __le16 [usertype]
> ...
>
> Reviewed-by: Sean Anderson <sean.anderson@linux.dev>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb_main.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index fc082a7a5a313be3d58a008533c3815cb1b1639a..c16d60048185b4cb473ddfcf4633fa2f6dea20cc 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> @@ -271,12 +271,10 @@ static bool hw_is_gem(void __iomem *addr, bool native_io)
>
> static void macb_set_hwaddr(struct macb *bp)
> {
> - u32 bottom;
> - u16 top;
> + u32 bottom = get_unaligned_le32(bp->dev->dev_addr);
> + u16 top = get_unaligned_le16(bp->dev->dev_addr + 4);
>
please change the order as per reverse xmas tree.
> - bottom = cpu_to_le32(*((u32 *)bp->dev->dev_addr));
> macb_or_gem_writel(bp, SA1B, bottom);
> - top = cpu_to_le16(*((u16 *)(bp->dev->dev_addr + 4)));
> macb_or_gem_writel(bp, SA1T, top);
>
> if (gem_has_ptp(bp)) {
>
--
🙏 Vineeth
* Re: [PATCH net v5 3/5] net: macb: move ring size computation to functions
From: Karumanchi, Vineeth @ 2025-09-11 6:43 UTC
To: Théo Lebrun, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven,
Harini Katakam, Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk
Hi Theo,
On 9/10/2025 9:45 PM, Théo Lebrun wrote:
<...>
> #define DEFAULT_TX_RING_SIZE 512 /* must be power of 2 */
> #define MIN_TX_RING_SIZE 64
> #define MAX_TX_RING_SIZE 4096
> -#define TX_RING_BYTES(bp) (macb_dma_desc_get_size(bp) \
> - * (bp)->tx_ring_size)
>
> /* level of occupied TX descriptors under which we wake up TX process */
> #define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->tx_ring_size / 4)
> @@ -2470,11 +2466,20 @@ static void macb_free_rx_buffers(struct macb *bp)
> }
> }
>
> +static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
> +{
> + return macb_dma_desc_get_size(bp) * bp->tx_ring_size + bp->tx_bd_rd_prefetch;
> +}
> +
> +static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
> +{
> + return macb_dma_desc_get_size(bp) * bp->rx_ring_size + bp->rx_bd_rd_prefetch;
> +}
> +
it would be good to have these functions as inline.
Maybe as a separate patch.
<...>
--
🙏 Vineeth
* Re: [PATCH net v5 3/5] net: macb: move ring size computation to functions
From: Théo Lebrun @ 2025-09-11 9:14 UTC
To: Karumanchi, Vineeth, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven,
Harini Katakam, Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk
Hello Vineeth,
On Thu Sep 11, 2025 at 8:43 AM CEST, Karumanchi, Vineeth wrote:
> On 9/10/2025 9:45 PM, Théo Lebrun wrote:
>> #define DEFAULT_TX_RING_SIZE 512 /* must be power of 2 */
>> #define MIN_TX_RING_SIZE 64
>> #define MAX_TX_RING_SIZE 4096
>> -#define TX_RING_BYTES(bp) (macb_dma_desc_get_size(bp) \
>> - * (bp)->tx_ring_size)
>>
>> /* level of occupied TX descriptors under which we wake up TX process */
>> #define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->tx_ring_size / 4)
>> @@ -2470,11 +2466,20 @@ static void macb_free_rx_buffers(struct macb *bp)
>> }
>> }
>>
>> +static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
>> +{
>> + return macb_dma_desc_get_size(bp) * bp->tx_ring_size + bp->tx_bd_rd_prefetch;
>> +}
>> +
>> +static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
>> +{
>> + return macb_dma_desc_get_size(bp) * bp->rx_ring_size + bp->rx_bd_rd_prefetch;
>> +}
>> +
>
> it would be good to have these functions as inline.
> Maybe as a separate patch.
I don't see why? Compilers are clever pieces of software; they'll know
to inline it.
If we added inline to macb_{tx,rx}_ring_size_per_queue(), should we also
add it to macb_dma_desc_get_size()? I do not know, but my compiler
decided to inline it as well. It might make other decisions on other
platforms.
Last point I see: those two functions are not called in the hotpath,
only at alloc & free. If we talk about inline for the theoretical speed
gain, then it doesn't matter in that case. If it is a code size aspect,
then once again the compiler is more aware than myself.
I don't like the tone, but it is part of the kernel doc and is on topic:
https://www.kernel.org/doc/html/latest/process/coding-style.html#the-inline-disease
Thanks Vineeth!
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net v5 5/5] net: macb: avoid dealing with endianness in macb_set_hwaddr()
From: Théo Lebrun @ 2025-09-11 9:22 UTC
To: Karumanchi, Vineeth, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Rob Herring, Krzysztof Kozlowski,
Conor Dooley, Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven,
Harini Katakam, Richard Cochran, Russell King
Cc: netdev, devicetree, linux-kernel, Thomas Petazzoni, Tawfik Bayouk,
Sean Anderson
On Thu Sep 11, 2025 at 5:13 AM CEST, Karumanchi, Vineeth wrote:
> On 9/10/2025 9:45 PM, Théo Lebrun wrote:
>> @@ -271,12 +271,10 @@ static bool hw_is_gem(void __iomem *addr, bool native_io)
>>
>> static void macb_set_hwaddr(struct macb *bp)
>> {
>> - u32 bottom;
>> - u16 top;
>> + u32 bottom = get_unaligned_le32(bp->dev->dev_addr);
>> + u16 top = get_unaligned_le16(bp->dev->dev_addr + 4);
>
> please change the order as per reverse xmas tree.
I had realised this before sending the patch but preferred keeping the
ordering as-is to access dev_addr+0 first then dev_addr+4.
RCT is a strict rule in net so I'll fix it in the next revision. Some
sneaky options were also considered: a spare space in the `u32 bottom`
line, express bottom using `dev_addr + 0`, or renaming variables. :-)
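For reference, a sketch of the straightforward reordering (reverse
xmas tree sorts declarations by line length, longest first, which
here means `top` before `bottom`):

  u16 top = get_unaligned_le16(bp->dev->dev_addr + 4);
  u32 bottom = get_unaligned_le32(bp->dev->dev_addr);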
Thanks Vineeth,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net v5 3/5] net: macb: move ring size computation to functions
From: Jakub Kicinski @ 2025-09-11 23:39 UTC
To: Théo Lebrun
Cc: Karumanchi, Vineeth, Andrew Lunn, David S. Miller, Eric Dumazet,
Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley,
Nicolas Ferre, Claudiu Beznea, Geert Uytterhoeven, Harini Katakam,
Richard Cochran, Russell King, netdev, devicetree, linux-kernel,
Thomas Petazzoni, Tawfik Bayouk
On Thu, 11 Sep 2025 11:14:52 +0200 Théo Lebrun wrote:
> > it would be good to have these functions as inline.
> > Maybe as a separate patch.
>
> I don't see why? Compilers are clever pieces, they'll know to inline it.
>
> If we added inline to macb_{tx,rx}_ring_size_per_queue(), should we also
> add it to macb_dma_desc_get_size()? I do not know, but my compiler
> decided to inline it as well. It might make other decisions on other
> platforms.
>
> Last point I see: those two functions are not called in the hotpath,
> only at alloc & free. If we talk about inline for the theoretical speed
> gain, then it doesn't matter in that case. If it is a code size aspect,
> then once again the compiler is more aware than myself.
>
> I don't like the tone, but it is part of the kernel doc and is on topic:
> https://www.kernel.org/doc/html/latest/process/coding-style.html#the-inline-disease
👍️ FWIW, please don't sprinkle inlines.