* [PATCH net-next 00/11] net: macb: implement context swapping
@ 2026-04-01 16:39 Théo Lebrun
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
MACB has a pretty primitive approach to buffer management: all buffers are
stored in `struct macb *bp`. On operations that require reallocating
buffers (set_ringparam & change_mtu at the moment), the only option is to
close the interface, change our global state and re-open the interface.
Two issues:
- It doesn't fly on memory-pressured systems: we free our precious
  buffers and may fail to reallocate them fully, meaning the machine
  just lost its network access.
- Anecdotally, it is pretty slow because it implies a full PHY reinit.
Instead, we shall:
- allocate a new context (including buffers) first
- if it fails, early return without any impact to the interface
- stop interface
- update global state (bp, netdev, etc)
- pass newly allocated buffer pointers to the hardware
- start interface
- free old context
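The sequence above boils down to an allocate-first swap. Here is a minimal
userspace C sketch of the pattern (hypothetical `struct ctx`/`ctx_alloc`
names, not the driver's actual `macb_context` code; hardware start/stop
steps are stand-in comments):

```c
#include <stdlib.h>

/* Hypothetical stand-in for the buffer context the series introduces. */
struct ctx {
	size_t ring_size;
	void *buffers;
};

static struct ctx *ctx_alloc(size_t ring_size)
{
	struct ctx *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	c->buffers = calloc(ring_size, 64); /* dummy 64-byte buffers */
	if (!c->buffers) {
		free(c);
		return NULL;
	}
	c->ring_size = ring_size;
	return c;
}

static void ctx_free(struct ctx *c)
{
	if (c) {
		free(c->buffers);
		free(c);
	}
}

/* Allocate the new context before touching the old one: on failure the
 * current context, and thus the interface, is left fully intact. */
static int change_ring_size(struct ctx **cur, size_t new_size)
{
	struct ctx *old, *next = ctx_alloc(new_size);

	if (!next)
		return -1;	/* early return, no impact to interface */
	/* stop interface ... */
	old = *cur;
	*cur = next;		/* update global state */
	/* pass next->buffers to hardware, start interface ... */
	ctx_free(old);		/* free old context */
	return 0;
}
```

The key property is that the only fallible step (allocation) happens before
any teardown, so a failed resize degrades to a no-op instead of a dead
interface.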
This is what we implement here. Both .set_ringparam() and
.ndo_change_mtu() are covered by this series. In the future,
at least .set_channels() [0], XDP [1] and XSK [2] would benefit.
The change is super intrusive so conflicts will be major. Sorry!
Thanks,
Have a nice day,
Théo
[0]: https://lore.kernel.org/netdev/20260317-macb-set-channels-v4-0-1bd4f4ffcfca@bootlin.com/
[1]: https://lore.kernel.org/netdev/20260323221047.2749577-1-pvalerio@redhat.com/
[2]: https://lore.kernel.org/netdev/20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com/
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
Théo Lebrun (11):
net: macb: unify device pointer naming convention
net: macb: unify `struct macb *` naming convention
net: macb: unify queue index variable naming convention and types
net: macb: enforce reverse christmas tree (RCT) convention
net: macb: allocate tieoff descriptor once across device lifetime
net: macb: introduce macb_context struct for buffer management
net: macb: avoid macb_init_rx_buffer_size() modifying state
net: macb: make `struct macb` subset reachable from macb_context struct
net: macb: introduce macb_context_alloc() helper
net: macb: use context swapping in .set_ringparam()
net: macb: use context swapping in .ndo_change_mtu()
drivers/net/ethernet/cadence/macb.h | 119 +-
drivers/net/ethernet/cadence/macb_main.c | 1731 +++++++++++++++++-------------
drivers/net/ethernet/cadence/macb_pci.c | 46 +-
drivers/net/ethernet/cadence/macb_ptp.c | 26 +-
4 files changed, 1090 insertions(+), 832 deletions(-)
---
base-commit: 321d1ee521de1362c22adadbc0ce066050a17783
change-id: 20260401-macb-context-bd0caf20414d
Best regards,
--
Théo Lebrun <theo.lebrun@bootlin.com>
* [PATCH net-next 01/11] net: macb: unify device pointer naming convention
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
Here are all device pointer variable permutations inside MACB:
struct device *dev;
struct net_device *dev;
struct net_device *ndev;
struct net_device *netdev;
struct pci_dev *pdev; // inside macb_pci.c
struct platform_device *pdev;
struct platform_device *plat_dev; // inside macb_pci.c
Unify to this convention:
struct device *dev;
struct net_device *netdev;
struct pci_dev *pci;
struct platform_device *pdev;
Ensure nothing slipped through using ctags tooling:
⟩ ctags -o - --kinds-c='{local}{member}{parameter}' \
--fields='{typeref}' drivers/net/ethernet/cadence/* | \
awk -F"\t" '
$NF~/struct:.*(device|dev) / {print $NF, $1}' | \
sort -u
typeref:struct:device * dev
typeref:struct:in_device * idev // ignored
typeref:struct:net_device * netdev
typeref:struct:pci_dev * pci
typeref:struct:phy_device * phy // ignored
typeref:struct:phy_device * phydev // ignored
typeref:struct:platform_device * pdev
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb.h | 14 +-
drivers/net/ethernet/cadence/macb_main.c | 628 ++++++++++++++++---------------
drivers/net/ethernet/cadence/macb_pci.c | 46 +--
drivers/net/ethernet/cadence/macb_ptp.c | 18 +-
4 files changed, 354 insertions(+), 352 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 16527dbab875..d6dd1d356e12 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1207,8 +1207,8 @@ struct macb_or_gem_ops {
/* MACB-PTP interface: adapt to platform needs. */
struct macb_ptp_info {
- void (*ptp_init)(struct net_device *ndev);
- void (*ptp_remove)(struct net_device *ndev);
+ void (*ptp_init)(struct net_device *netdev);
+ void (*ptp_remove)(struct net_device *netdev);
s32 (*get_ptp_max_adj)(void);
unsigned int (*get_tsu_rate)(struct macb *bp);
int (*get_ts_info)(struct net_device *dev,
@@ -1326,7 +1326,7 @@ struct macb {
struct clk *tx_clk;
struct clk *rx_clk;
struct clk *tsu_clk;
- struct net_device *dev;
+ struct net_device *netdev;
/* Protects hw_stats and ethtool_stats */
spinlock_t stats_lock;
union {
@@ -1406,8 +1406,8 @@ enum macb_bd_control {
TSTAMP_ALL_FRAMES,
};
-void gem_ptp_init(struct net_device *ndev);
-void gem_ptp_remove(struct net_device *ndev);
+void gem_ptp_init(struct net_device *netdev);
+void gem_ptp_remove(struct net_device *netdev);
void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
void gem_ptp_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc);
static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc)
@@ -1432,8 +1432,8 @@ int gem_set_hwtst(struct net_device *dev,
struct kernel_hwtstamp_config *tstamp_config,
struct netlink_ext_ack *extack);
#else
-static inline void gem_ptp_init(struct net_device *ndev) { }
-static inline void gem_ptp_remove(struct net_device *ndev) { }
+static inline void gem_ptp_init(struct net_device *netdev) { }
+static inline void gem_ptp_remove(struct net_device *netdev) { }
static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 7a48ebe0741f..00bd662b5e46 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -248,9 +248,9 @@ static void macb_set_hwaddr(struct macb *bp)
u32 bottom;
u16 top;
- bottom = get_unaligned_le32(bp->dev->dev_addr);
+ bottom = get_unaligned_le32(bp->netdev->dev_addr);
macb_or_gem_writel(bp, SA1B, bottom);
- top = get_unaligned_le16(bp->dev->dev_addr + 4);
+ top = get_unaligned_le16(bp->netdev->dev_addr + 4);
macb_or_gem_writel(bp, SA1T, top);
if (gem_has_ptp(bp)) {
@@ -287,13 +287,13 @@ static void macb_get_hwaddr(struct macb *bp)
addr[5] = (top >> 8) & 0xff;
if (is_valid_ether_addr(addr)) {
- eth_hw_addr_set(bp->dev, addr);
+ eth_hw_addr_set(bp->netdev, addr);
return;
}
}
dev_info(&bp->pdev->dev, "invalid hw address, using random\n");
- eth_hw_addr_random(bp->dev);
+ eth_hw_addr_random(bp->netdev);
}
static int macb_mdio_wait_for_idle(struct macb *bp)
@@ -505,12 +505,12 @@ static void macb_set_tx_clk(struct macb *bp, int speed)
ferr = abs(rate_rounded - rate);
ferr = DIV_ROUND_UP(ferr, rate / 100000);
if (ferr > 5)
- netdev_warn(bp->dev,
+ netdev_warn(bp->netdev,
"unable to generate target frequency: %ld Hz\n",
rate);
if (clk_set_rate(bp->tx_clk, rate_rounded))
- netdev_err(bp->dev, "adjusting tx_clk failed.\n");
+ netdev_err(bp->netdev, "adjusting tx_clk failed.\n");
}
static void macb_usx_pcs_link_up(struct phylink_pcs *pcs, unsigned int neg_mode,
@@ -693,8 +693,8 @@ static void macb_tx_lpi_wake(struct macb *bp)
static void macb_mac_disable_tx_lpi(struct phylink_config *config)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
unsigned long flags;
cancel_delayed_work_sync(&bp->tx_lpi_work);
@@ -708,8 +708,8 @@ static void macb_mac_disable_tx_lpi(struct phylink_config *config)
static int macb_mac_enable_tx_lpi(struct phylink_config *config, u32 timer,
bool tx_clk_stop)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
unsigned long flags;
spin_lock_irqsave(&bp->lock, flags);
@@ -728,8 +728,8 @@ static int macb_mac_enable_tx_lpi(struct phylink_config *config, u32 timer,
static void macb_mac_config(struct phylink_config *config, unsigned int mode,
const struct phylink_link_state *state)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
unsigned long flags;
u32 old_ctrl, ctrl;
u32 old_ncr, ncr;
@@ -770,8 +770,8 @@ static void macb_mac_config(struct phylink_config *config, unsigned int mode,
static void macb_mac_link_down(struct phylink_config *config, unsigned int mode,
phy_interface_t interface)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned int q;
u32 ctrl;
@@ -785,7 +785,7 @@ static void macb_mac_link_down(struct phylink_config *config, unsigned int mode,
ctrl = macb_readl(bp, NCR) & ~(MACB_BIT(RE) | MACB_BIT(TE));
macb_writel(bp, NCR, ctrl);
- netif_tx_stop_all_queues(ndev);
+ netif_tx_stop_all_queues(netdev);
}
/* Use juggling algorithm to left rotate tx ring and tx skb array */
@@ -885,8 +885,8 @@ static void macb_mac_link_up(struct phylink_config *config,
int speed, int duplex,
bool tx_pause, bool rx_pause)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned long flags;
unsigned int q;
@@ -942,14 +942,14 @@ static void macb_mac_link_up(struct phylink_config *config,
macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
- netif_tx_wake_all_queues(ndev);
+ netif_tx_wake_all_queues(netdev);
}
static struct phylink_pcs *macb_mac_select_pcs(struct phylink_config *config,
phy_interface_t interface)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
if (interface == PHY_INTERFACE_MODE_10GBASER)
return &bp->phylink_usx_pcs;
@@ -978,7 +978,7 @@ static bool macb_phy_handle_exists(struct device_node *dn)
static int macb_phylink_connect(struct macb *bp)
{
struct device_node *dn = bp->pdev->dev.of_node;
- struct net_device *dev = bp->dev;
+ struct net_device *netdev = bp->netdev;
struct phy_device *phydev;
int ret;
@@ -988,7 +988,7 @@ static int macb_phylink_connect(struct macb *bp)
if (!dn || (ret && !macb_phy_handle_exists(dn))) {
phydev = phy_find_first(bp->mii_bus);
if (!phydev) {
- netdev_err(dev, "no PHY found\n");
+ netdev_err(netdev, "no PHY found\n");
return -ENXIO;
}
@@ -997,7 +997,7 @@ static int macb_phylink_connect(struct macb *bp)
}
if (ret) {
- netdev_err(dev, "Could not attach PHY (%d)\n", ret);
+ netdev_err(netdev, "Could not attach PHY (%d)\n", ret);
return ret;
}
@@ -1009,21 +1009,21 @@ static int macb_phylink_connect(struct macb *bp)
static void macb_get_pcs_fixed_state(struct phylink_config *config,
struct phylink_link_state *state)
{
- struct net_device *ndev = to_net_dev(config->dev);
- struct macb *bp = netdev_priv(ndev);
+ struct net_device *netdev = to_net_dev(config->dev);
+ struct macb *bp = netdev_priv(netdev);
state->link = (macb_readl(bp, NSR) & MACB_BIT(NSR_LINK)) != 0;
}
/* based on au1000_eth. c*/
-static int macb_mii_probe(struct net_device *dev)
+static int macb_mii_probe(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
bp->phylink_sgmii_pcs.ops = &macb_phylink_pcs_ops;
bp->phylink_usx_pcs.ops = &macb_phylink_usx_pcs_ops;
- bp->phylink_config.dev = &dev->dev;
+ bp->phylink_config.dev = &netdev->dev;
bp->phylink_config.type = PHYLINK_NETDEV;
bp->phylink_config.mac_managed_pm = true;
@@ -1082,7 +1082,7 @@ static int macb_mii_probe(struct net_device *dev)
bp->phylink = phylink_create(&bp->phylink_config, bp->pdev->dev.fwnode,
bp->phy_interface, &macb_phylink_ops);
if (IS_ERR(bp->phylink)) {
- netdev_err(dev, "Could not create a phylink instance (%ld)\n",
+ netdev_err(netdev, "Could not create a phylink instance (%ld)\n",
PTR_ERR(bp->phylink));
return PTR_ERR(bp->phylink);
}
@@ -1129,7 +1129,7 @@ static int macb_mii_init(struct macb *bp)
*/
mdio_np = of_get_child_by_name(np, "mdio");
if (!mdio_np && of_phy_is_fixed_link(np))
- return macb_mii_probe(bp->dev);
+ return macb_mii_probe(bp->netdev);
/* Enable management port */
macb_writel(bp, NCR, MACB_BIT(MPE));
@@ -1150,13 +1150,13 @@ static int macb_mii_init(struct macb *bp)
bp->mii_bus->priv = bp;
bp->mii_bus->parent = &bp->pdev->dev;
- dev_set_drvdata(&bp->dev->dev, bp->mii_bus);
+ dev_set_drvdata(&bp->netdev->dev, bp->mii_bus);
err = macb_mdiobus_register(bp, mdio_np);
if (err)
goto err_out_free_mdiobus;
- err = macb_mii_probe(bp->dev);
+ err = macb_mii_probe(bp->netdev);
if (err)
goto err_out_unregister_bus;
@@ -1264,7 +1264,7 @@ static void macb_tx_error_task(struct work_struct *work)
unsigned long flags;
queue_index = queue - bp->queues;
- netdev_vdbg(bp->dev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
+ netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
queue_index, queue->tx_tail, queue->tx_head);
/* Prevent the queue NAPI TX poll from running, as it calls
@@ -1277,14 +1277,14 @@ static void macb_tx_error_task(struct work_struct *work)
spin_lock_irqsave(&bp->lock, flags);
/* Make sure nobody is trying to queue up new packets */
- netif_tx_stop_all_queues(bp->dev);
+ netif_tx_stop_all_queues(bp->netdev);
/* Stop transmission now
* (in case we have just queued new packets)
* macb/gem must be halted to write TBQP register
*/
if (macb_halt_tx(bp)) {
- netdev_err(bp->dev, "BUG: halt tx timed out\n");
+ netdev_err(bp->netdev, "BUG: halt tx timed out\n");
macb_writel(bp, NCR, macb_readl(bp, NCR) & (~MACB_BIT(TE)));
halt_timeout = true;
}
@@ -1313,13 +1313,13 @@ static void macb_tx_error_task(struct work_struct *work)
* since it's the only one written back by the hardware
*/
if (!(ctrl & MACB_BIT(TX_BUF_EXHAUSTED))) {
- netdev_vdbg(bp->dev, "txerr skb %u (data %p) TX complete\n",
+ netdev_vdbg(bp->netdev, "txerr skb %u (data %p) TX complete\n",
macb_tx_ring_wrap(bp, tail),
skb->data);
- bp->dev->stats.tx_packets++;
+ bp->netdev->stats.tx_packets++;
queue->stats.tx_packets++;
packets++;
- bp->dev->stats.tx_bytes += skb->len;
+ bp->netdev->stats.tx_bytes += skb->len;
queue->stats.tx_bytes += skb->len;
bytes += skb->len;
}
@@ -1329,7 +1329,7 @@ static void macb_tx_error_task(struct work_struct *work)
* those. Statistics are updated by hardware.
*/
if (ctrl & MACB_BIT(TX_BUF_EXHAUSTED))
- netdev_err(bp->dev,
+ netdev_err(bp->netdev,
"BUG: TX buffers exhausted mid-frame\n");
desc->ctrl = ctrl | MACB_BIT(TX_USED);
@@ -1338,7 +1338,7 @@ static void macb_tx_error_task(struct work_struct *work)
macb_tx_unmap(bp, tx_skb, 0);
}
- netdev_tx_completed_queue(netdev_get_tx_queue(bp->dev, queue_index),
+ netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
packets, bytes);
/* Set end of TX queue */
@@ -1363,7 +1363,7 @@ static void macb_tx_error_task(struct work_struct *work)
macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TE));
/* Now we are ready to start transmission again */
- netif_tx_start_all_queues(bp->dev);
+ netif_tx_start_all_queues(bp->netdev);
macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
spin_unlock_irqrestore(&bp->lock, flags);
@@ -1442,12 +1442,12 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
!ptp_one_step_sync(skb))
gem_ptp_do_txstamp(bp, skb, desc);
- netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n",
+ netdev_vdbg(bp->netdev, "skb %u (data %p) TX complete\n",
macb_tx_ring_wrap(bp, tail),
skb->data);
- bp->dev->stats.tx_packets++;
+ bp->netdev->stats.tx_packets++;
queue->stats.tx_packets++;
- bp->dev->stats.tx_bytes += skb->len;
+ bp->netdev->stats.tx_bytes += skb->len;
queue->stats.tx_bytes += skb->len;
packets++;
bytes += skb->len;
@@ -1465,14 +1465,14 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
}
}
- netdev_tx_completed_queue(netdev_get_tx_queue(bp->dev, queue_index),
+ netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
packets, bytes);
queue->tx_tail = tail;
- if (__netif_subqueue_stopped(bp->dev, queue_index) &&
+ if (__netif_subqueue_stopped(bp->netdev, queue_index) &&
CIRC_CNT(queue->tx_head, queue->tx_tail,
bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
- netif_wake_subqueue(bp->dev, queue_index);
+ netif_wake_subqueue(bp->netdev, queue_index);
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
if (packets)
@@ -1500,9 +1500,9 @@ static void gem_rx_refill(struct macb_queue *queue)
if (!queue->rx_skbuff[entry]) {
/* allocate sk_buff for this free entry in ring */
- skb = netdev_alloc_skb(bp->dev, bp->rx_buffer_size);
+ skb = netdev_alloc_skb(bp->netdev, bp->rx_buffer_size);
if (unlikely(!skb)) {
- netdev_err(bp->dev,
+ netdev_err(bp->netdev,
"Unable to allocate sk_buff\n");
break;
}
@@ -1551,8 +1551,8 @@ static void gem_rx_refill(struct macb_queue *queue)
/* Make descriptor updates visible to hardware */
wmb();
- netdev_vdbg(bp->dev, "rx ring: queue: %p, prepared head %d, tail %d\n",
- queue, queue->rx_prepared_head, queue->rx_tail);
+ netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n",
+ queue, queue->rx_prepared_head, queue->rx_tail);
}
/* Mark DMA descriptors from begin up to and not including end as unused */
@@ -1612,17 +1612,17 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
count++;
if (!(ctrl & MACB_BIT(RX_SOF) && ctrl & MACB_BIT(RX_EOF))) {
- netdev_err(bp->dev,
+ netdev_err(bp->netdev,
"not whole frame pointed by descriptor\n");
- bp->dev->stats.rx_dropped++;
+ bp->netdev->stats.rx_dropped++;
queue->stats.rx_dropped++;
break;
}
skb = queue->rx_skbuff[entry];
if (unlikely(!skb)) {
- netdev_err(bp->dev,
+ netdev_err(bp->netdev,
"inconsistent Rx descriptor chain\n");
- bp->dev->stats.rx_dropped++;
+ bp->netdev->stats.rx_dropped++;
queue->stats.rx_dropped++;
break;
}
@@ -1630,28 +1630,28 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
queue->rx_skbuff[entry] = NULL;
len = ctrl & bp->rx_frm_len_mask;
- netdev_vdbg(bp->dev, "gem_rx %u (len %u)\n", entry, len);
+ netdev_vdbg(bp->netdev, "gem_rx %u (len %u)\n", entry, len);
skb_put(skb, len);
dma_unmap_single(&bp->pdev->dev, addr,
bp->rx_buffer_size, DMA_FROM_DEVICE);
- skb->protocol = eth_type_trans(skb, bp->dev);
+ skb->protocol = eth_type_trans(skb, bp->netdev);
skb_checksum_none_assert(skb);
- if (bp->dev->features & NETIF_F_RXCSUM &&
- !(bp->dev->flags & IFF_PROMISC) &&
+ if (bp->netdev->features & NETIF_F_RXCSUM &&
+ !(bp->netdev->flags & IFF_PROMISC) &&
GEM_BFEXT(RX_CSUM, ctrl) & GEM_RX_CSUM_CHECKED_MASK)
skb->ip_summed = CHECKSUM_UNNECESSARY;
- bp->dev->stats.rx_packets++;
+ bp->netdev->stats.rx_packets++;
queue->stats.rx_packets++;
- bp->dev->stats.rx_bytes += skb->len;
+ bp->netdev->stats.rx_bytes += skb->len;
queue->stats.rx_bytes += skb->len;
gem_ptp_do_rxstamp(bp, skb, desc);
#if defined(DEBUG) && defined(VERBOSE_DEBUG)
- netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
+ netdev_vdbg(bp->netdev, "received skb of length %u, csum: %08x\n",
skb->len, skb->csum);
print_hex_dump(KERN_DEBUG, " mac: ", DUMP_PREFIX_ADDRESS, 16, 1,
skb_mac_header(skb), 16, true);
@@ -1680,9 +1680,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
desc = macb_rx_desc(queue, last_frag);
len = desc->ctrl & bp->rx_frm_len_mask;
- netdev_vdbg(bp->dev, "macb_rx_frame frags %u - %u (len %u)\n",
- macb_rx_ring_wrap(bp, first_frag),
- macb_rx_ring_wrap(bp, last_frag), len);
+ netdev_vdbg(bp->netdev, "macb_rx_frame frags %u - %u (len %u)\n",
+ macb_rx_ring_wrap(bp, first_frag),
+ macb_rx_ring_wrap(bp, last_frag), len);
/* The ethernet header starts NET_IP_ALIGN bytes into the
* first buffer. Since the header is 14 bytes, this makes the
@@ -1692,9 +1692,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
* the two padding bytes into the skb so that we avoid hitting
* the slowpath in memcpy(), and pull them off afterwards.
*/
- skb = netdev_alloc_skb(bp->dev, len + NET_IP_ALIGN);
+ skb = netdev_alloc_skb(bp->netdev, len + NET_IP_ALIGN);
if (!skb) {
- bp->dev->stats.rx_dropped++;
+ bp->netdev->stats.rx_dropped++;
for (frag = first_frag; ; frag++) {
desc = macb_rx_desc(queue, frag);
desc->addr &= ~MACB_BIT(RX_USED);
@@ -1738,11 +1738,11 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
wmb();
__skb_pull(skb, NET_IP_ALIGN);
- skb->protocol = eth_type_trans(skb, bp->dev);
+ skb->protocol = eth_type_trans(skb, bp->netdev);
- bp->dev->stats.rx_packets++;
- bp->dev->stats.rx_bytes += skb->len;
- netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
+ bp->netdev->stats.rx_packets++;
+ bp->netdev->stats.rx_bytes += skb->len;
+ netdev_vdbg(bp->netdev, "received skb of length %u, csum: %08x\n",
skb->len, skb->csum);
napi_gro_receive(napi, skb);
@@ -1822,7 +1822,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
unsigned long flags;
u32 ctrl;
- netdev_err(bp->dev, "RX queue corruption: reset it\n");
+ netdev_err(bp->netdev, "RX queue corruption: reset it\n");
spin_lock_irqsave(&bp->lock, flags);
@@ -1869,7 +1869,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
work_done = bp->macbgem_ops.mog_rx(queue, napi, budget);
- netdev_vdbg(bp->dev, "RX poll: queue = %u, work_done = %d, budget = %d\n",
+ netdev_vdbg(bp->netdev, "RX poll: queue = %u, work_done = %d, budget = %d\n",
(unsigned int)(queue - bp->queues), work_done, budget);
if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -1889,7 +1889,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
queue_writel(queue, IDR, bp->rx_intr_mask);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, MACB_BIT(RCOMP));
- netdev_vdbg(bp->dev, "poll: packets pending, reschedule\n");
+ netdev_vdbg(bp->netdev, "poll: packets pending, reschedule\n");
napi_schedule(napi);
}
}
@@ -1953,11 +1953,11 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
rmb(); // ensure txubr_pending is up to date
if (queue->txubr_pending) {
queue->txubr_pending = false;
- netdev_vdbg(bp->dev, "poll: tx restart\n");
+ netdev_vdbg(bp->netdev, "poll: tx restart\n");
macb_tx_restart(queue);
}
- netdev_vdbg(bp->dev, "TX poll: queue = %u, work_done = %d, budget = %d\n",
+ netdev_vdbg(bp->netdev, "TX poll: queue = %u, work_done = %d, budget = %d\n",
(unsigned int)(queue - bp->queues), work_done, budget);
if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -1977,7 +1977,7 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
queue_writel(queue, IDR, MACB_BIT(TCOMP));
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, MACB_BIT(TCOMP));
- netdev_vdbg(bp->dev, "TX poll: packets pending, reschedule\n");
+ netdev_vdbg(bp->netdev, "TX poll: packets pending, reschedule\n");
napi_schedule(napi);
}
}
@@ -1988,7 +1988,7 @@ static int macb_tx_poll(struct napi_struct *napi, int budget)
static void macb_hresp_error_task(struct work_struct *work)
{
struct macb *bp = from_work(bp, work, hresp_err_bh_work);
- struct net_device *dev = bp->dev;
+ struct net_device *netdev = bp->netdev;
struct macb_queue *queue;
unsigned int q;
u32 ctrl;
@@ -2002,8 +2002,8 @@ static void macb_hresp_error_task(struct work_struct *work)
ctrl &= ~(MACB_BIT(RE) | MACB_BIT(TE));
macb_writel(bp, NCR, ctrl);
- netif_tx_stop_all_queues(dev);
- netif_carrier_off(dev);
+ netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
bp->macbgem_ops.mog_init_rings(bp);
@@ -2020,8 +2020,8 @@ static void macb_hresp_error_task(struct work_struct *work)
ctrl |= MACB_BIT(RE) | MACB_BIT(TE);
macb_writel(bp, NCR, ctrl);
- netif_carrier_on(dev);
- netif_tx_start_all_queues(dev);
+ netif_carrier_on(netdev);
+ netif_tx_start_all_queues(netdev);
}
static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
@@ -2040,7 +2040,7 @@ static irqreturn_t macb_wol_interrupt(int irq, void *dev_id)
if (status & MACB_BIT(WOL)) {
queue_writel(queue, IDR, MACB_BIT(WOL));
macb_writel(bp, WOL, 0);
- netdev_vdbg(bp->dev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
+ netdev_vdbg(bp->netdev, "MACB WoL: queue = %u, isr = 0x%08lx\n",
(unsigned int)(queue - bp->queues),
(unsigned long)status);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -2069,7 +2069,7 @@ static irqreturn_t gem_wol_interrupt(int irq, void *dev_id)
if (status & GEM_BIT(WOL)) {
queue_writel(queue, IDR, GEM_BIT(WOL));
gem_writel(bp, WOL, 0);
- netdev_vdbg(bp->dev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
+ netdev_vdbg(bp->netdev, "GEM WoL: queue = %u, isr = 0x%08lx\n",
(unsigned int)(queue - bp->queues),
(unsigned long)status);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
@@ -2086,7 +2086,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
{
struct macb_queue *queue = dev_id;
struct macb *bp = queue->bp;
- struct net_device *dev = bp->dev;
+ struct net_device *netdev = bp->netdev;
u32 status, ctrl;
status = queue_readl(queue, ISR);
@@ -2098,14 +2098,14 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
while (status) {
/* close possible race with dev_close */
- if (unlikely(!netif_running(dev))) {
+ if (unlikely(!netif_running(netdev))) {
queue_writel(queue, IDR, -1);
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, -1);
break;
}
- netdev_vdbg(bp->dev, "queue = %u, isr = 0x%08lx\n",
+ netdev_vdbg(bp->netdev, "queue = %u, isr = 0x%08lx\n",
(unsigned int)(queue - bp->queues),
(unsigned long)status);
@@ -2121,7 +2121,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
queue_writel(queue, ISR, MACB_BIT(RCOMP));
if (napi_schedule_prep(&queue->napi_rx)) {
- netdev_vdbg(bp->dev, "scheduling RX softirq\n");
+ netdev_vdbg(bp->netdev, "scheduling RX softirq\n");
__napi_schedule(&queue->napi_rx);
}
}
@@ -2139,7 +2139,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
}
if (napi_schedule_prep(&queue->napi_tx)) {
- netdev_vdbg(bp->dev, "scheduling TX softirq\n");
+ netdev_vdbg(bp->netdev, "scheduling TX softirq\n");
__napi_schedule(&queue->napi_tx);
}
}
@@ -2190,7 +2190,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (status & MACB_BIT(HRESP)) {
queue_work(system_bh_wq, &bp->hresp_err_bh_work);
- netdev_err(dev, "DMA bus error: HRESP not OK\n");
+ netdev_err(netdev, "DMA bus error: HRESP not OK\n");
if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
queue_writel(queue, ISR, MACB_BIT(HRESP));
@@ -2207,9 +2207,9 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
/* Polling receive - used by netconsole and other diagnostic tools
* to allow network i/o with interrupts disabled.
*/
-static void macb_poll_controller(struct net_device *dev)
+static void macb_poll_controller(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned long flags;
unsigned int q;
@@ -2303,7 +2303,7 @@ static unsigned int macb_tx_map(struct macb *bp,
/* Should never happen */
if (unlikely(!tx_skb)) {
- netdev_err(bp->dev, "BUG! empty skb!\n");
+ netdev_err(bp->netdev, "BUG! empty skb!\n");
return 0;
}
@@ -2354,7 +2354,7 @@ static unsigned int macb_tx_map(struct macb *bp,
if (i == queue->tx_head) {
ctrl |= MACB_BF(TX_LSO, lso_ctrl);
ctrl |= MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl);
- if ((bp->dev->features & NETIF_F_HW_CSUM) &&
+ if ((bp->netdev->features & NETIF_F_HW_CSUM) &&
skb->ip_summed != CHECKSUM_PARTIAL && !lso_ctrl &&
!ptp_one_step_sync(skb))
ctrl |= MACB_BIT(TX_NOCRC);
@@ -2378,7 +2378,7 @@ static unsigned int macb_tx_map(struct macb *bp,
return 0;
dma_error:
- netdev_err(bp->dev, "TX DMA map failed\n");
+ netdev_err(bp->netdev, "TX DMA map failed\n");
for (i = queue->tx_head; i != tx_head; i++) {
tx_skb = macb_tx_skb(queue, i);
@@ -2390,7 +2390,7 @@ static unsigned int macb_tx_map(struct macb *bp,
}
static netdev_features_t macb_features_check(struct sk_buff *skb,
- struct net_device *dev,
+ struct net_device *netdev,
netdev_features_t features)
{
unsigned int nr_frags, f;
@@ -2442,7 +2442,7 @@ static inline int macb_clear_csum(struct sk_buff *skb)
return 0;
}
-static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
+static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *netdev)
{
bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) ||
skb_is_nonlinear(*skb);
@@ -2451,7 +2451,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
struct sk_buff *nskb;
u32 fcs;
- if (!(ndev->features & NETIF_F_HW_CSUM) ||
+ if (!(netdev->features & NETIF_F_HW_CSUM) ||
!((*skb)->ip_summed != CHECKSUM_PARTIAL) ||
skb_shinfo(*skb)->gso_size || ptp_one_step_sync(*skb))
return 0;
@@ -2493,10 +2493,11 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
return 0;
}
-static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
{
u16 queue_index = skb_get_queue_mapping(skb);
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue = &bp->queues[queue_index];
unsigned int desc_cnt, nr_frags, frag_size, f;
unsigned int hdrlen;
@@ -2509,7 +2510,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
return ret;
}
- if (macb_pad_and_fcs(&skb, dev)) {
+ if (macb_pad_and_fcs(&skb, netdev)) {
dev_kfree_skb_any(skb);
return ret;
}
@@ -2528,7 +2529,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
else
hdrlen = skb_tcp_all_headers(skb);
if (skb_headlen(skb) < hdrlen) {
- netdev_err(bp->dev, "Error - LSO headers fragmented!!!\n");
+ netdev_err(bp->netdev, "Error - LSO headers fragmented!!!\n");
/* if this is required, would need to copy to single buffer */
return NETDEV_TX_BUSY;
}
@@ -2536,7 +2537,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
hdrlen = umin(skb_headlen(skb), bp->max_tx_length);
#if defined(DEBUG) && defined(VERBOSE_DEBUG)
- netdev_vdbg(bp->dev,
+ netdev_vdbg(bp->netdev,
"start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
queue_index, skb->len, skb->head, skb->data,
skb_tail_pointer(skb), skb_end_pointer(skb));
@@ -2564,8 +2565,8 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* This is a hard error, log it. */
if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
bp->tx_ring_size) < desc_cnt) {
- netif_stop_subqueue(dev, queue_index);
- netdev_dbg(bp->dev, "tx_head = %u, tx_tail = %u\n",
+ netif_stop_subqueue(netdev, queue_index);
+ netdev_dbg(netdev, "tx_head = %u, tx_tail = %u\n",
queue->tx_head, queue->tx_tail);
ret = NETDEV_TX_BUSY;
goto unlock;
@@ -2580,7 +2581,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* Make newly initialized descriptor visible to hardware */
wmb();
skb_tx_timestamp(skb);
- netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index),
+ netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, queue_index),
skb->len);
spin_lock(&bp->lock);
@@ -2589,7 +2590,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
spin_unlock(&bp->lock);
if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
- netif_stop_subqueue(dev, queue_index);
+ netif_stop_subqueue(netdev, queue_index);
unlock:
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
@@ -2605,7 +2606,7 @@ static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
bp->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) {
- netdev_dbg(bp->dev,
+ netdev_dbg(bp->netdev,
"RX buffer must be multiple of %d bytes, expanding\n",
RX_BUFFER_MULTIPLE);
bp->rx_buffer_size =
@@ -2613,8 +2614,8 @@ static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
}
}
- netdev_dbg(bp->dev, "mtu [%u] rx_buffer_size [%zu]\n",
- bp->dev->mtu, bp->rx_buffer_size);
+ netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%zu]\n",
+ bp->netdev->mtu, bp->rx_buffer_size);
}
static void gem_free_rx_buffers(struct macb *bp)
@@ -2713,7 +2714,7 @@ static int gem_alloc_rx_buffers(struct macb *bp)
if (!queue->rx_skbuff)
return -ENOMEM;
else
- netdev_dbg(bp->dev,
+ netdev_dbg(bp->netdev,
"Allocated %d RX struct sk_buff entries at %p\n",
bp->rx_ring_size, queue->rx_skbuff);
}
@@ -2731,7 +2732,7 @@ static int macb_alloc_rx_buffers(struct macb *bp)
if (!queue->rx_buffers)
return -ENOMEM;
- netdev_dbg(bp->dev,
+ netdev_dbg(bp->netdev,
"Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
size, (unsigned long)queue->rx_buffers_dma, queue->rx_buffers);
return 0;
@@ -2757,14 +2758,14 @@ static int macb_alloc_consistent(struct macb *bp)
tx = dma_alloc_coherent(dev, size, &tx_dma, GFP_KERNEL);
if (!tx || upper_32_bits(tx_dma) != upper_32_bits(tx_dma + size - 1))
goto out_err;
- netdev_dbg(bp->dev, "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
+ netdev_dbg(bp->netdev, "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
size, bp->num_queues, (unsigned long)tx_dma, tx);
size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
rx = dma_alloc_coherent(dev, size, &rx_dma, GFP_KERNEL);
if (!rx || upper_32_bits(rx_dma) != upper_32_bits(rx_dma + size - 1))
goto out_err;
- netdev_dbg(bp->dev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
+ netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
size, bp->num_queues, (unsigned long)rx_dma, rx);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -2993,7 +2994,7 @@ static void macb_configure_dma(struct macb *bp)
else
dmacfg |= GEM_BIT(ENDIA_DESC); /* CPU in big endian */
- if (bp->dev->features & NETIF_F_HW_CSUM)
+ if (bp->netdev->features & NETIF_F_HW_CSUM)
dmacfg |= GEM_BIT(TXCOEN);
else
dmacfg &= ~GEM_BIT(TXCOEN);
@@ -3003,7 +3004,7 @@ static void macb_configure_dma(struct macb *bp)
dmacfg |= GEM_BIT(ADDR64);
if (macb_dma_ptp(bp))
dmacfg |= GEM_BIT(RXEXT) | GEM_BIT(TXEXT);
- netdev_dbg(bp->dev, "Cadence configure DMA with 0x%08x\n",
+ netdev_dbg(bp->netdev, "Cadence configure DMA with 0x%08x\n",
dmacfg);
gem_writel(bp, DMACFG, dmacfg);
}
@@ -3027,11 +3028,11 @@ static void macb_init_hw(struct macb *bp)
config |= MACB_BIT(JFRAME); /* Enable jumbo frames */
else
config |= MACB_BIT(BIG); /* Receive oversized frames */
- if (bp->dev->flags & IFF_PROMISC)
+ if (bp->netdev->flags & IFF_PROMISC)
config |= MACB_BIT(CAF); /* Copy All Frames */
- else if (macb_is_gem(bp) && bp->dev->features & NETIF_F_RXCSUM)
+ else if (macb_is_gem(bp) && bp->netdev->features & NETIF_F_RXCSUM)
config |= GEM_BIT(RXCOEN);
- if (!(bp->dev->flags & IFF_BROADCAST))
+ if (!(bp->netdev->flags & IFF_BROADCAST))
config |= MACB_BIT(NBC); /* No BroadCast */
config |= macb_dbw(bp);
macb_writel(bp, NCFGR, config);
@@ -3105,17 +3106,17 @@ static int hash_get_index(__u8 *addr)
}
/* Add multicast addresses to the internal multicast-hash table. */
-static void macb_sethashtable(struct net_device *dev)
+static void macb_sethashtable(struct net_device *netdev)
{
struct netdev_hw_addr *ha;
unsigned long mc_filter[2];
unsigned int bitnr;
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
mc_filter[0] = 0;
mc_filter[1] = 0;
- netdev_for_each_mc_addr(ha, dev) {
+ netdev_for_each_mc_addr(ha, netdev) {
bitnr = hash_get_index(ha->addr);
mc_filter[bitnr >> 5] |= 1 << (bitnr & 31);
}
@@ -3125,14 +3126,14 @@ static void macb_sethashtable(struct net_device *dev)
}
/* Enable/Disable promiscuous and multicast modes. */
-static void macb_set_rx_mode(struct net_device *dev)
+static void macb_set_rx_mode(struct net_device *netdev)
{
unsigned long cfg;
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
cfg = macb_readl(bp, NCFGR);
- if (dev->flags & IFF_PROMISC) {
+ if (netdev->flags & IFF_PROMISC) {
/* Enable promiscuous mode */
cfg |= MACB_BIT(CAF);
@@ -3144,20 +3145,20 @@ static void macb_set_rx_mode(struct net_device *dev)
cfg &= ~MACB_BIT(CAF);
/* Enable RX checksum offload only if requested */
- if (macb_is_gem(bp) && dev->features & NETIF_F_RXCSUM)
+ if (macb_is_gem(bp) && netdev->features & NETIF_F_RXCSUM)
cfg |= GEM_BIT(RXCOEN);
}
- if (dev->flags & IFF_ALLMULTI) {
+ if (netdev->flags & IFF_ALLMULTI) {
/* Enable all multicast mode */
macb_or_gem_writel(bp, HRB, -1);
macb_or_gem_writel(bp, HRT, -1);
cfg |= MACB_BIT(NCFGR_MTI);
- } else if (!netdev_mc_empty(dev)) {
+ } else if (!netdev_mc_empty(netdev)) {
/* Enable specific multicasts */
- macb_sethashtable(dev);
+ macb_sethashtable(netdev);
cfg |= MACB_BIT(NCFGR_MTI);
- } else if (dev->flags & (~IFF_ALLMULTI)) {
+ } else if (netdev->flags & (~IFF_ALLMULTI)) {
/* Disable all multicast mode */
macb_or_gem_writel(bp, HRB, 0);
macb_or_gem_writel(bp, HRT, 0);
@@ -3167,15 +3168,15 @@ static void macb_set_rx_mode(struct net_device *dev)
macb_writel(bp, NCFGR, cfg);
}
-static int macb_open(struct net_device *dev)
+static int macb_open(struct net_device *netdev)
{
- size_t bufsz = dev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
- struct macb *bp = netdev_priv(dev);
+ size_t bufsz = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned int q;
int err;
- netdev_dbg(bp->dev, "open\n");
+ netdev_dbg(bp->netdev, "open\n");
err = pm_runtime_resume_and_get(&bp->pdev->dev);
if (err < 0)
@@ -3186,7 +3187,7 @@ static int macb_open(struct net_device *dev)
err = macb_alloc_consistent(bp);
if (err) {
- netdev_err(dev, "Unable to allocate DMA memory (error %d)\n",
+ netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
err);
goto pm_exit;
}
@@ -3213,10 +3214,10 @@ static int macb_open(struct net_device *dev)
if (err)
goto phy_off;
- netif_tx_start_all_queues(dev);
+ netif_tx_start_all_queues(netdev);
if (bp->ptp_info)
- bp->ptp_info->ptp_init(dev);
+ bp->ptp_info->ptp_init(netdev);
return 0;
@@ -3235,19 +3236,19 @@ static int macb_open(struct net_device *dev)
return err;
}
-static int macb_close(struct net_device *dev)
+static int macb_close(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned long flags;
unsigned int q;
- netif_tx_stop_all_queues(dev);
+ netif_tx_stop_all_queues(netdev);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
napi_disable(&queue->napi_rx);
napi_disable(&queue->napi_tx);
- netdev_tx_reset_queue(netdev_get_tx_queue(dev, q));
+ netdev_tx_reset_queue(netdev_get_tx_queue(netdev, q));
}
cancel_delayed_work_sync(&bp->tx_lpi_work);
@@ -3259,38 +3260,38 @@ static int macb_close(struct net_device *dev)
spin_lock_irqsave(&bp->lock, flags);
macb_reset_hw(bp);
- netif_carrier_off(dev);
+ netif_carrier_off(netdev);
spin_unlock_irqrestore(&bp->lock, flags);
macb_free_consistent(bp);
if (bp->ptp_info)
- bp->ptp_info->ptp_remove(dev);
+ bp->ptp_info->ptp_remove(netdev);
pm_runtime_put(&bp->pdev->dev);
return 0;
}
-static int macb_change_mtu(struct net_device *dev, int new_mtu)
+static int macb_change_mtu(struct net_device *netdev, int new_mtu)
{
- if (netif_running(dev))
+ if (netif_running(netdev))
return -EBUSY;
- WRITE_ONCE(dev->mtu, new_mtu);
+ WRITE_ONCE(netdev->mtu, new_mtu);
return 0;
}
-static int macb_set_mac_addr(struct net_device *dev, void *addr)
+static int macb_set_mac_addr(struct net_device *netdev, void *addr)
{
int err;
- err = eth_mac_addr(dev, addr);
+ err = eth_mac_addr(netdev, addr);
if (err < 0)
return err;
- macb_set_hwaddr(netdev_priv(dev));
+ macb_set_hwaddr(netdev_priv(netdev));
return 0;
}
@@ -3328,7 +3329,7 @@ static void gem_get_stats(struct macb *bp, struct rtnl_link_stats64 *nstat)
struct gem_stats *hwstat = &bp->hw_stats.gem;
spin_lock_irq(&bp->stats_lock);
- if (netif_running(bp->dev))
+ if (netif_running(bp->netdev))
gem_update_stats(bp);
nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
@@ -3361,10 +3362,10 @@ static void gem_get_stats(struct macb *bp, struct rtnl_link_stats64 *nstat)
spin_unlock_irq(&bp->stats_lock);
}
-static void gem_get_ethtool_stats(struct net_device *dev,
+static void gem_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats *stats, u64 *data)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
spin_lock_irq(&bp->stats_lock);
gem_update_stats(bp);
@@ -3373,9 +3374,9 @@ static void gem_get_ethtool_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static int gem_get_sset_count(struct net_device *dev, int sset)
+static int gem_get_sset_count(struct net_device *netdev, int sset)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
switch (sset) {
case ETH_SS_STATS:
@@ -3385,9 +3386,9 @@ static int gem_get_sset_count(struct net_device *dev, int sset)
}
}
-static void gem_get_ethtool_strings(struct net_device *dev, u32 sset, u8 *p)
+static void gem_get_ethtool_strings(struct net_device *netdev, u32 sset, u8 *p)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned int i;
unsigned int q;
@@ -3406,13 +3407,13 @@ static void gem_get_ethtool_strings(struct net_device *dev, u32 sset, u8 *p)
}
}
-static void macb_get_stats(struct net_device *dev,
+static void macb_get_stats(struct net_device *netdev,
struct rtnl_link_stats64 *nstat)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_stats *hwstat = &bp->hw_stats.macb;
- netdev_stats_to_stats64(nstat, &bp->dev->stats);
+ netdev_stats_to_stats64(nstat, &bp->netdev->stats);
if (macb_is_gem(bp)) {
gem_get_stats(bp, nstat);
return;
@@ -3456,10 +3457,10 @@ static void macb_get_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void macb_get_pause_stats(struct net_device *dev,
+static void macb_get_pause_stats(struct net_device *netdev,
struct ethtool_pause_stats *pause_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_stats *hwstat = &bp->hw_stats.macb;
spin_lock_irq(&bp->stats_lock);
@@ -3469,10 +3470,10 @@ static void macb_get_pause_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void gem_get_pause_stats(struct net_device *dev,
+static void gem_get_pause_stats(struct net_device *netdev,
struct ethtool_pause_stats *pause_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct gem_stats *hwstat = &bp->hw_stats.gem;
spin_lock_irq(&bp->stats_lock);
@@ -3482,10 +3483,10 @@ static void gem_get_pause_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void macb_get_eth_mac_stats(struct net_device *dev,
+static void macb_get_eth_mac_stats(struct net_device *netdev,
struct ethtool_eth_mac_stats *mac_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_stats *hwstat = &bp->hw_stats.macb;
spin_lock_irq(&bp->stats_lock);
@@ -3507,10 +3508,10 @@ static void macb_get_eth_mac_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void gem_get_eth_mac_stats(struct net_device *dev,
+static void gem_get_eth_mac_stats(struct net_device *netdev,
struct ethtool_eth_mac_stats *mac_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct gem_stats *hwstat = &bp->hw_stats.gem;
spin_lock_irq(&bp->stats_lock);
@@ -3540,10 +3541,10 @@ static void gem_get_eth_mac_stats(struct net_device *dev,
}
/* TODO: Report SQE test errors when added to phy_stats */
-static void macb_get_eth_phy_stats(struct net_device *dev,
+static void macb_get_eth_phy_stats(struct net_device *netdev,
struct ethtool_eth_phy_stats *phy_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_stats *hwstat = &bp->hw_stats.macb;
spin_lock_irq(&bp->stats_lock);
@@ -3552,10 +3553,10 @@ static void macb_get_eth_phy_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void gem_get_eth_phy_stats(struct net_device *dev,
+static void gem_get_eth_phy_stats(struct net_device *netdev,
struct ethtool_eth_phy_stats *phy_stats)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct gem_stats *hwstat = &bp->hw_stats.gem;
spin_lock_irq(&bp->stats_lock);
@@ -3564,11 +3565,11 @@ static void gem_get_eth_phy_stats(struct net_device *dev,
spin_unlock_irq(&bp->stats_lock);
}
-static void macb_get_rmon_stats(struct net_device *dev,
+static void macb_get_rmon_stats(struct net_device *netdev,
struct ethtool_rmon_stats *rmon_stats,
const struct ethtool_rmon_hist_range **ranges)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_stats *hwstat = &bp->hw_stats.macb;
spin_lock_irq(&bp->stats_lock);
@@ -3590,11 +3591,11 @@ static const struct ethtool_rmon_hist_range gem_rmon_ranges[] = {
{ },
};
-static void gem_get_rmon_stats(struct net_device *dev,
+static void gem_get_rmon_stats(struct net_device *netdev,
struct ethtool_rmon_stats *rmon_stats,
const struct ethtool_rmon_hist_range **ranges)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct gem_stats *hwstat = &bp->hw_stats.gem;
spin_lock_irq(&bp->stats_lock);
@@ -3625,10 +3626,10 @@ static int macb_get_regs_len(struct net_device *netdev)
return MACB_GREGS_NBR * sizeof(u32);
}
-static void macb_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
void *p)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
unsigned int tail, head;
u32 *regs_buff = p;
@@ -3745,16 +3746,16 @@ static int macb_set_ringparam(struct net_device *netdev,
return 0;
}
- if (netif_running(bp->dev)) {
+ if (netif_running(bp->netdev)) {
reset = 1;
- macb_close(bp->dev);
+ macb_close(bp->netdev);
}
bp->rx_ring_size = new_rx_size;
bp->tx_ring_size = new_tx_size;
if (reset)
- macb_open(bp->dev);
+ macb_open(bp->netdev);
return 0;
}
@@ -3781,13 +3782,13 @@ static s32 gem_get_ptp_max_adj(void)
return 64000000;
}
-static int gem_get_ts_info(struct net_device *dev,
+static int gem_get_ts_info(struct net_device *netdev,
struct kernel_ethtool_ts_info *info)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
if (!macb_dma_ptp(bp)) {
- ethtool_op_get_ts_info(dev, info);
+ ethtool_op_get_ts_info(netdev, info);
return 0;
}
@@ -3834,7 +3835,7 @@ static int macb_get_ts_info(struct net_device *netdev,
static void gem_enable_flow_filters(struct macb *bp, bool enable)
{
- struct net_device *netdev = bp->dev;
+ struct net_device *netdev = bp->netdev;
struct ethtool_rx_fs_item *item;
u32 t2_scr;
int num_t2_scr;
@@ -4164,16 +4165,16 @@ static const struct ethtool_ops macb_ethtool_ops = {
.set_ringparam = macb_set_ringparam,
};
-static int macb_get_eee(struct net_device *dev, struct ethtool_keee *eee)
+static int macb_get_eee(struct net_device *netdev, struct ethtool_keee *eee)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
return phylink_ethtool_get_eee(bp->phylink, eee);
}
-static int macb_set_eee(struct net_device *dev, struct ethtool_keee *eee)
+static int macb_set_eee(struct net_device *netdev, struct ethtool_keee *eee)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
return phylink_ethtool_set_eee(bp->phylink, eee);
}
@@ -4204,43 +4205,43 @@ static const struct ethtool_ops gem_ethtool_ops = {
.set_eee = macb_set_eee,
};
-static int macb_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+static int macb_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
- if (!netif_running(dev))
+ if (!netif_running(netdev))
return -EINVAL;
return phylink_mii_ioctl(bp->phylink, rq, cmd);
}
-static int macb_hwtstamp_get(struct net_device *dev,
+static int macb_hwtstamp_get(struct net_device *netdev,
struct kernel_hwtstamp_config *cfg)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
- if (!netif_running(dev))
+ if (!netif_running(netdev))
return -EINVAL;
if (!bp->ptp_info)
return -EOPNOTSUPP;
- return bp->ptp_info->get_hwtst(dev, cfg);
+ return bp->ptp_info->get_hwtst(netdev, cfg);
}
-static int macb_hwtstamp_set(struct net_device *dev,
+static int macb_hwtstamp_set(struct net_device *netdev,
struct kernel_hwtstamp_config *cfg,
struct netlink_ext_ack *extack)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
- if (!netif_running(dev))
+ if (!netif_running(netdev))
return -EINVAL;
if (!bp->ptp_info)
return -EOPNOTSUPP;
- return bp->ptp_info->set_hwtst(dev, cfg, extack);
+ return bp->ptp_info->set_hwtst(netdev, cfg, extack);
}
static inline void macb_set_txcsum_feature(struct macb *bp,
@@ -4263,7 +4264,7 @@ static inline void macb_set_txcsum_feature(struct macb *bp,
static inline void macb_set_rxcsum_feature(struct macb *bp,
netdev_features_t features)
{
- struct net_device *netdev = bp->dev;
+ struct net_device *netdev = bp->netdev;
u32 val;
if (!macb_is_gem(bp))
@@ -4310,7 +4311,7 @@ static int macb_set_features(struct net_device *netdev,
static void macb_restore_features(struct macb *bp)
{
- struct net_device *netdev = bp->dev;
+ struct net_device *netdev = bp->netdev;
netdev_features_t features = netdev->features;
struct ethtool_rx_fs_item *item;
@@ -4327,14 +4328,14 @@ static void macb_restore_features(struct macb *bp)
macb_set_rxflow_feature(bp, features);
}
-static int macb_taprio_setup_replace(struct net_device *ndev,
+static int macb_taprio_setup_replace(struct net_device *netdev,
struct tc_taprio_qopt_offload *conf)
{
u64 total_on_time = 0, start_time_sec = 0, start_time = conf->base_time;
u32 configured_queues = 0, speed = 0, start_time_nsec;
struct macb_queue_enst_config *enst_queue;
struct tc_taprio_sched_entry *entry;
- struct macb *bp = netdev_priv(ndev);
+ struct macb *bp = netdev_priv(netdev);
struct ethtool_link_ksettings kset;
struct macb_queue *queue;
u32 queue_mask;
@@ -4343,13 +4344,13 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
int err;
if (conf->num_entries > bp->num_queues) {
- netdev_err(ndev, "Too many TAPRIO entries: %zu > %d queues\n",
+ netdev_err(netdev, "Too many TAPRIO entries: %zu > %d queues\n",
conf->num_entries, bp->num_queues);
return -EINVAL;
}
if (conf->base_time < 0) {
- netdev_err(ndev, "Invalid base_time: must be 0 or positive, got %lld\n",
+ netdev_err(netdev, "Invalid base_time: must be 0 or positive, got %lld\n",
conf->base_time);
return -ERANGE;
}
@@ -4357,13 +4358,13 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* Get the current link speed */
err = phylink_ethtool_ksettings_get(bp->phylink, &kset);
if (unlikely(err)) {
- netdev_err(ndev, "Failed to get link settings: %d\n", err);
+ netdev_err(netdev, "Failed to get link settings: %d\n", err);
return err;
}
speed = kset.base.speed;
if (unlikely(speed <= 0)) {
- netdev_err(ndev, "Invalid speed: %d\n", speed);
+ netdev_err(netdev, "Invalid speed: %d\n", speed);
return -EINVAL;
}
@@ -4376,7 +4377,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
entry = &conf->entries[i];
if (entry->command != TC_TAPRIO_CMD_SET_GATES) {
- netdev_err(ndev, "Entry %zu: unsupported command %d\n",
+ netdev_err(netdev, "Entry %zu: unsupported command %d\n",
i, entry->command);
err = -EOPNOTSUPP;
goto cleanup;
@@ -4384,7 +4385,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* Validate gate_mask: must be nonzero, single queue, and within range */
if (!is_power_of_2(entry->gate_mask)) {
- netdev_err(ndev, "Entry %zu: gate_mask 0x%x is not a power of 2 (only one queue per entry allowed)\n",
+ netdev_err(netdev, "Entry %zu: gate_mask 0x%x is not a power of 2 (only one queue per entry allowed)\n",
i, entry->gate_mask);
err = -EINVAL;
goto cleanup;
@@ -4393,7 +4394,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* gate_mask must not select queues outside the valid queues */
queue_id = order_base_2(entry->gate_mask);
if (queue_id >= bp->num_queues) {
- netdev_err(ndev, "Entry %zu: gate_mask 0x%x exceeds queue range (max_queues=%d)\n",
+ netdev_err(netdev, "Entry %zu: gate_mask 0x%x exceeds queue range (max_queues=%d)\n",
i, entry->gate_mask, bp->num_queues);
err = -EINVAL;
goto cleanup;
@@ -4403,7 +4404,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
start_time_sec = start_time;
start_time_nsec = do_div(start_time_sec, NSEC_PER_SEC);
if (start_time_sec > GENMASK(GEM_START_TIME_SEC_SIZE - 1, 0)) {
- netdev_err(ndev, "Entry %zu: Start time %llu s exceeds hardware limit\n",
+ netdev_err(netdev, "Entry %zu: Start time %llu s exceeds hardware limit\n",
i, start_time_sec);
err = -ERANGE;
goto cleanup;
@@ -4411,7 +4412,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* Check for on time limit */
if (entry->interval > enst_max_hw_interval(speed)) {
- netdev_err(ndev, "Entry %zu: interval %u ns exceeds hardware limit %llu ns\n",
+ netdev_err(netdev, "Entry %zu: interval %u ns exceeds hardware limit %llu ns\n",
i, entry->interval, enst_max_hw_interval(speed));
err = -ERANGE;
goto cleanup;
@@ -4419,7 +4420,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* Check for off time limit*/
if ((conf->cycle_time - entry->interval) > enst_max_hw_interval(speed)) {
- netdev_err(ndev, "Entry %zu: off_time %llu ns exceeds hardware limit %llu ns\n",
+ netdev_err(netdev, "Entry %zu: off_time %llu ns exceeds hardware limit %llu ns\n",
i, conf->cycle_time - entry->interval,
enst_max_hw_interval(speed));
err = -ERANGE;
@@ -4442,13 +4443,13 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
/* Check total interval doesn't exceed cycle time */
if (total_on_time > conf->cycle_time) {
- netdev_err(ndev, "Total ON %llu ns exceeds cycle time %llu ns\n",
+ netdev_err(netdev, "Total ON %llu ns exceeds cycle time %llu ns\n",
total_on_time, conf->cycle_time);
err = -EINVAL;
goto cleanup;
}
- netdev_dbg(ndev, "TAPRIO setup: %zu entries, base_time=%lld ns, cycle_time=%llu ns\n",
+ netdev_dbg(netdev, "TAPRIO setup: %zu entries, base_time=%lld ns, cycle_time=%llu ns\n",
conf->num_entries, conf->base_time, conf->cycle_time);
/* All validations passed - proceed with hardware configuration */
@@ -4473,7 +4474,7 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
gem_writel(bp, ENST_CONTROL, configured_queues);
}
- netdev_info(ndev, "TAPRIO configuration completed successfully: %zu entries, %d queues configured\n",
+ netdev_info(netdev, "TAPRIO configuration completed successfully: %zu entries, %d queues configured\n",
conf->num_entries, hweight32(configured_queues));
cleanup:
@@ -4481,14 +4482,14 @@ static int macb_taprio_setup_replace(struct net_device *ndev,
return err;
}
-static void macb_taprio_destroy(struct net_device *ndev)
+static void macb_taprio_destroy(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(ndev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
u32 queue_mask;
unsigned int q;
- netdev_reset_tc(ndev);
+ netdev_reset_tc(netdev);
queue_mask = BIT_U32(bp->num_queues) - 1;
scoped_guard(spinlock_irqsave, &bp->lock) {
@@ -4503,30 +4504,30 @@ static void macb_taprio_destroy(struct net_device *ndev)
queue_writel(queue, ENST_OFF_TIME, 0);
}
}
- netdev_info(ndev, "TAPRIO destroy: All gates disabled\n");
+ netdev_info(netdev, "TAPRIO destroy: All gates disabled\n");
}
-static int macb_setup_taprio(struct net_device *ndev,
+static int macb_setup_taprio(struct net_device *netdev,
struct tc_taprio_qopt_offload *taprio)
{
- struct macb *bp = netdev_priv(ndev);
+ struct macb *bp = netdev_priv(netdev);
int err = 0;
- if (unlikely(!(ndev->hw_features & NETIF_F_HW_TC)))
+ if (unlikely(!(netdev->hw_features & NETIF_F_HW_TC)))
return -EOPNOTSUPP;
/* Check if Device is in runtime suspend */
if (unlikely(pm_runtime_suspended(&bp->pdev->dev))) {
- netdev_err(ndev, "Device is in runtime suspend\n");
+ netdev_err(netdev, "Device is in runtime suspend\n");
return -EOPNOTSUPP;
}
switch (taprio->cmd) {
case TAPRIO_CMD_REPLACE:
- err = macb_taprio_setup_replace(ndev, taprio);
+ err = macb_taprio_setup_replace(netdev, taprio);
break;
case TAPRIO_CMD_DESTROY:
- macb_taprio_destroy(ndev);
+ macb_taprio_destroy(netdev);
break;
default:
err = -EOPNOTSUPP;
@@ -4535,15 +4536,15 @@ static int macb_setup_taprio(struct net_device *ndev,
return err;
}
-static int macb_setup_tc(struct net_device *dev, enum tc_setup_type type,
+static int macb_setup_tc(struct net_device *netdev, enum tc_setup_type type,
void *type_data)
{
- if (!dev || !type_data)
+ if (!netdev || !type_data)
return -EINVAL;
switch (type) {
case TC_SETUP_QDISC_TAPRIO:
- return macb_setup_taprio(dev, type_data);
+ return macb_setup_taprio(netdev, type_data);
default:
return -EOPNOTSUPP;
}
@@ -4751,9 +4752,9 @@ static int macb_clk_init(struct platform_device *pdev, struct clk **pclk,
static int macb_init_dflt(struct platform_device *pdev)
{
- struct net_device *dev = platform_get_drvdata(pdev);
+ struct net_device *netdev = platform_get_drvdata(pdev);
unsigned int hw_q, q;
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
int err;
u32 val, reg;
@@ -4769,8 +4770,8 @@ static int macb_init_dflt(struct platform_device *pdev)
queue = &bp->queues[q];
queue->bp = bp;
spin_lock_init(&queue->tx_ptr_lock);
- netif_napi_add(dev, &queue->napi_rx, macb_rx_poll);
- netif_napi_add(dev, &queue->napi_tx, macb_tx_poll);
+ netif_napi_add(netdev, &queue->napi_rx, macb_rx_poll);
+ netif_napi_add(netdev, &queue->napi_tx, macb_tx_poll);
if (hw_q) {
queue->ISR = GEM_ISR(hw_q - 1);
queue->IER = GEM_IER(hw_q - 1);
@@ -4800,7 +4801,7 @@ static int macb_init_dflt(struct platform_device *pdev)
*/
queue->irq = platform_get_irq(pdev, q);
err = devm_request_irq(&pdev->dev, queue->irq, macb_interrupt,
- IRQF_SHARED, dev->name, queue);
+ IRQF_SHARED, netdev->name, queue);
if (err) {
dev_err(&pdev->dev,
"Unable to request IRQ %d (error %d)\n",
@@ -4812,7 +4813,7 @@ static int macb_init_dflt(struct platform_device *pdev)
q++;
}
- dev->netdev_ops = &macb_netdev_ops;
+ netdev->netdev_ops = &macb_netdev_ops;
/* setup appropriated routines according to adapter type */
if (macb_is_gem(bp)) {
@@ -4820,39 +4821,39 @@ static int macb_init_dflt(struct platform_device *pdev)
bp->macbgem_ops.mog_free_rx_buffers = gem_free_rx_buffers;
bp->macbgem_ops.mog_init_rings = gem_init_rings;
bp->macbgem_ops.mog_rx = gem_rx;
- dev->ethtool_ops = &gem_ethtool_ops;
+ netdev->ethtool_ops = &gem_ethtool_ops;
} else {
bp->macbgem_ops.mog_alloc_rx_buffers = macb_alloc_rx_buffers;
bp->macbgem_ops.mog_free_rx_buffers = macb_free_rx_buffers;
bp->macbgem_ops.mog_init_rings = macb_init_rings;
bp->macbgem_ops.mog_rx = macb_rx;
- dev->ethtool_ops = &macb_ethtool_ops;
+ netdev->ethtool_ops = &macb_ethtool_ops;
}
- netdev_sw_irq_coalesce_default_on(dev);
+ netdev_sw_irq_coalesce_default_on(netdev);
- dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ netdev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
/* Set features */
- dev->hw_features = NETIF_F_SG;
+ netdev->hw_features = NETIF_F_SG;
/* Check LSO capability; runtime detection can be overridden by a cap
* flag if the hardware is known to be buggy
*/
if (!(bp->caps & MACB_CAPS_NO_LSO) &&
GEM_BFEXT(PBUF_LSO, gem_readl(bp, DCFG6)))
- dev->hw_features |= MACB_NETIF_LSO;
+ netdev->hw_features |= MACB_NETIF_LSO;
/* Checksum offload is only available on gem with packet buffer */
if (macb_is_gem(bp) && !(bp->caps & MACB_CAPS_FIFO_MODE))
- dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
+ netdev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
if (bp->caps & MACB_CAPS_SG_DISABLED)
- dev->hw_features &= ~NETIF_F_SG;
+ netdev->hw_features &= ~NETIF_F_SG;
/* Enable HW_TC if hardware supports QBV */
if (bp->caps & MACB_CAPS_QBV)
- dev->hw_features |= NETIF_F_HW_TC;
+ netdev->hw_features |= NETIF_F_HW_TC;
- dev->features = dev->hw_features;
+ netdev->features = netdev->hw_features;
/* Check RX Flow Filters support.
* Max Rx flows set by availability of screeners & compare regs:
@@ -4870,7 +4871,7 @@ static int macb_init_dflt(struct platform_device *pdev)
reg = GEM_BFINS(ETHTCMP, (uint16_t)ETH_P_IP, reg);
gem_writel_n(bp, ETHT, SCRT2_ETHT, reg);
/* Filtering is supported in hw but don't enable it in kernel now */
- dev->hw_features |= NETIF_F_NTUPLE;
+ netdev->hw_features |= NETIF_F_NTUPLE;
/* init Rx flow definitions */
bp->rx_fs_list.count = 0;
spin_lock_init(&bp->rx_fs_lock);
@@ -5073,9 +5074,9 @@ static void at91ether_stop(struct macb *lp)
}
/* Open the ethernet interface */
-static int at91ether_open(struct net_device *dev)
+static int at91ether_open(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(dev);
+ struct macb *lp = netdev_priv(netdev);
u32 ctl;
int ret;
@@ -5097,7 +5098,7 @@ static int at91ether_open(struct net_device *dev)
if (ret)
goto stop;
- netif_start_queue(dev);
+ netif_start_queue(netdev);
return 0;
@@ -5109,11 +5110,11 @@ static int at91ether_open(struct net_device *dev)
}
/* Close the interface */
-static int at91ether_close(struct net_device *dev)
+static int at91ether_close(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(dev);
+ struct macb *lp = netdev_priv(netdev);
- netif_stop_queue(dev);
+ netif_stop_queue(netdev);
phylink_stop(lp->phylink);
phylink_disconnect_phy(lp->phylink);
@@ -5127,14 +5128,14 @@ static int at91ether_close(struct net_device *dev)
/* Transmit packet */
static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
- struct net_device *dev)
+ struct net_device *netdev)
{
- struct macb *lp = netdev_priv(dev);
+ struct macb *lp = netdev_priv(netdev);
if (macb_readl(lp, TSR) & MACB_BIT(RM9200_BNQ)) {
int desc = 0;
- netif_stop_queue(dev);
+ netif_stop_queue(netdev);
/* Store packet information (to free when Tx completed) */
lp->rm9200_txq[desc].skb = skb;
@@ -5143,8 +5144,8 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&lp->pdev->dev, lp->rm9200_txq[desc].mapping)) {
dev_kfree_skb_any(skb);
- dev->stats.tx_dropped++;
- netdev_err(dev, "%s: DMA mapping error\n", __func__);
+ netdev->stats.tx_dropped++;
+ netdev_err(netdev, "%s: DMA mapping error\n", __func__);
return NETDEV_TX_OK;
}
@@ -5154,7 +5155,8 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
macb_writel(lp, TCR, skb->len);
} else {
- netdev_err(dev, "%s called, but device is busy!\n", __func__);
+ netdev_err(netdev, "%s called, but device is busy!\n",
+ __func__);
return NETDEV_TX_BUSY;
}
@@ -5164,9 +5166,9 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
/* Extract received frame from buffer descriptors and sent to upper layers.
* (Called from interrupt context)
*/
-static void at91ether_rx(struct net_device *dev)
+static void at91ether_rx(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(dev);
+ struct macb *lp = netdev_priv(netdev);
struct macb_queue *q = &lp->queues[0];
struct macb_dma_desc *desc;
unsigned char *p_recv;
@@ -5177,21 +5179,21 @@ static void at91ether_rx(struct net_device *dev)
while (desc->addr & MACB_BIT(RX_USED)) {
p_recv = q->rx_buffers + q->rx_tail * AT91ETHER_MAX_RBUFF_SZ;
pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
- skb = netdev_alloc_skb(dev, pktlen + 2);
+ skb = netdev_alloc_skb(netdev, pktlen + 2);
if (skb) {
skb_reserve(skb, 2);
skb_put_data(skb, p_recv, pktlen);
- skb->protocol = eth_type_trans(skb, dev);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += pktlen;
+ skb->protocol = eth_type_trans(skb, netdev);
+ netdev->stats.rx_packets++;
+ netdev->stats.rx_bytes += pktlen;
netif_rx(skb);
} else {
- dev->stats.rx_dropped++;
+ netdev->stats.rx_dropped++;
}
if (desc->ctrl & MACB_BIT(RX_MHASH_MATCH))
- dev->stats.multicast++;
+ netdev->stats.multicast++;
/* reset ownership bit */
desc->addr &= ~MACB_BIT(RX_USED);
@@ -5209,8 +5211,8 @@ static void at91ether_rx(struct net_device *dev)
/* MAC interrupt handler */
static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
{
- struct net_device *dev = dev_id;
- struct macb *lp = netdev_priv(dev);
+ struct net_device *netdev = dev_id;
+ struct macb *lp = netdev_priv(netdev);
u32 intstatus, ctl;
unsigned int desc;
@@ -5221,13 +5223,13 @@ static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
/* Receive complete */
if (intstatus & MACB_BIT(RCOMP))
- at91ether_rx(dev);
+ at91ether_rx(netdev);
/* Transmit complete */
if (intstatus & MACB_BIT(TCOMP)) {
/* The TCOM bit is set even if the transmission failed */
if (intstatus & (MACB_BIT(ISR_TUND) | MACB_BIT(ISR_RLE)))
- dev->stats.tx_errors++;
+ netdev->stats.tx_errors++;
desc = 0;
if (lp->rm9200_txq[desc].skb) {
@@ -5235,10 +5237,10 @@ static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
lp->rm9200_txq[desc].skb = NULL;
dma_unmap_single(&lp->pdev->dev, lp->rm9200_txq[desc].mapping,
lp->rm9200_txq[desc].size, DMA_TO_DEVICE);
- dev->stats.tx_packets++;
- dev->stats.tx_bytes += lp->rm9200_txq[desc].size;
+ netdev->stats.tx_packets++;
+ netdev->stats.tx_bytes += lp->rm9200_txq[desc].size;
}
- netif_wake_queue(dev);
+ netif_wake_queue(netdev);
}
/* Work-around for EMAC Errata section 41.3.1 */
@@ -5250,18 +5252,18 @@ static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
}
if (intstatus & MACB_BIT(ISR_ROVR))
- netdev_err(dev, "ROVR error\n");
+ netdev_err(netdev, "ROVR error\n");
return IRQ_HANDLED;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
-static void at91ether_poll_controller(struct net_device *dev)
+static void at91ether_poll_controller(struct net_device *netdev)
{
unsigned long flags;
local_irq_save(flags);
- at91ether_interrupt(dev->irq, dev);
+ at91ether_interrupt(netdev->irq, netdev);
local_irq_restore(flags);
}
#endif
@@ -5308,17 +5310,17 @@ static int at91ether_clk_init(struct platform_device *pdev, struct clk **pclk,
static int at91ether_init(struct platform_device *pdev)
{
- struct net_device *dev = platform_get_drvdata(pdev);
- struct macb *bp = netdev_priv(dev);
+ struct net_device *netdev = platform_get_drvdata(pdev);
+ struct macb *bp = netdev_priv(netdev);
int err;
bp->queues[0].bp = bp;
- dev->netdev_ops = &at91ether_netdev_ops;
- dev->ethtool_ops = &macb_ethtool_ops;
+ netdev->netdev_ops = &at91ether_netdev_ops;
+ netdev->ethtool_ops = &macb_ethtool_ops;
- err = devm_request_irq(&pdev->dev, dev->irq, at91ether_interrupt,
- 0, dev->name, dev);
+ err = devm_request_irq(&pdev->dev, netdev->irq, at91ether_interrupt,
+ 0, netdev->name, netdev);
if (err)
return err;
@@ -5447,8 +5449,8 @@ static int fu540_c000_init(struct platform_device *pdev)
static int init_reset_optional(struct platform_device *pdev)
{
- struct net_device *dev = platform_get_drvdata(pdev);
- struct macb *bp = netdev_priv(dev);
+ struct net_device *netdev = platform_get_drvdata(pdev);
+ struct macb *bp = netdev_priv(netdev);
int ret;
if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
@@ -5763,7 +5765,7 @@ static int macb_probe(struct platform_device *pdev)
const struct macb_config *macb_config;
struct clk *tsu_clk = NULL;
phy_interface_t interface;
- struct net_device *dev;
+ struct net_device *netdev;
struct resource *regs;
u32 wtrmrk_rst_val;
void __iomem *mem;
@@ -5798,19 +5800,19 @@ static int macb_probe(struct platform_device *pdev)
goto err_disable_clocks;
}
- dev = alloc_etherdev_mq(sizeof(*bp), num_queues);
- if (!dev) {
+ netdev = alloc_etherdev_mq(sizeof(*bp), num_queues);
+ if (!netdev) {
err = -ENOMEM;
goto err_disable_clocks;
}
- dev->base_addr = regs->start;
+ netdev->base_addr = regs->start;
- SET_NETDEV_DEV(dev, &pdev->dev);
+ SET_NETDEV_DEV(netdev, &pdev->dev);
- bp = netdev_priv(dev);
+ bp = netdev_priv(netdev);
bp->pdev = pdev;
- bp->dev = dev;
+ bp->netdev = netdev;
bp->regs = mem;
bp->native_io = native_io;
if (native_io) {
@@ -5883,21 +5885,21 @@ static int macb_probe(struct platform_device *pdev)
bp->caps |= MACB_CAPS_DMA_64B;
}
#endif
- platform_set_drvdata(pdev, dev);
+ platform_set_drvdata(pdev, netdev);
- dev->irq = platform_get_irq(pdev, 0);
- if (dev->irq < 0) {
- err = dev->irq;
+ netdev->irq = platform_get_irq(pdev, 0);
+ if (netdev->irq < 0) {
+ err = netdev->irq;
goto err_out_free_netdev;
}
/* MTU range: 68 - 1518 or 10240 */
- dev->min_mtu = GEM_MTU_MIN_SIZE;
+ netdev->min_mtu = GEM_MTU_MIN_SIZE;
if ((bp->caps & MACB_CAPS_JUMBO) && bp->jumbo_max_len)
- dev->max_mtu = MIN(bp->jumbo_max_len, RX_BUFFER_MAX) -
+ netdev->max_mtu = MIN(bp->jumbo_max_len, RX_BUFFER_MAX) -
ETH_HLEN - ETH_FCS_LEN;
else
- dev->max_mtu = 1536 - ETH_HLEN - ETH_FCS_LEN;
+ netdev->max_mtu = 1536 - ETH_HLEN - ETH_FCS_LEN;
if (bp->caps & MACB_CAPS_BD_RD_PREFETCH) {
val = GEM_BFEXT(RXBD_RDBUFF, gem_readl(bp, DCFG10));
@@ -5915,7 +5917,7 @@ static int macb_probe(struct platform_device *pdev)
if (bp->caps & MACB_CAPS_NEEDS_RSTONUBR)
bp->rx_intr_mask |= MACB_BIT(RXUBR);
- err = of_get_ethdev_address(np, bp->dev);
+ err = of_get_ethdev_address(np, bp->netdev);
if (err == -EPROBE_DEFER)
goto err_out_free_netdev;
else if (err)
@@ -5937,9 +5939,9 @@ static int macb_probe(struct platform_device *pdev)
if (err)
goto err_out_phy_exit;
- netif_carrier_off(dev);
+ netif_carrier_off(netdev);
- err = register_netdev(dev);
+ err = register_netdev(netdev);
if (err) {
dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
goto err_out_unregister_mdio;
@@ -5948,9 +5950,9 @@ static int macb_probe(struct platform_device *pdev)
INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task);
INIT_DELAYED_WORK(&bp->tx_lpi_work, macb_tx_lpi_work_fn);
- netdev_info(dev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
+ netdev_info(netdev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
- dev->base_addr, dev->irq, dev->dev_addr);
+ netdev->base_addr, netdev->irq, netdev->dev_addr);
pm_runtime_put_autosuspend(&bp->pdev->dev);
@@ -5964,7 +5966,7 @@ static int macb_probe(struct platform_device *pdev)
phy_exit(bp->phy);
err_out_free_netdev:
- free_netdev(dev);
+ free_netdev(netdev);
err_disable_clocks:
macb_clks_disable(pclk, hclk, tx_clk, rx_clk, tsu_clk);
@@ -5977,14 +5979,14 @@ static int macb_probe(struct platform_device *pdev)
static void macb_remove(struct platform_device *pdev)
{
- struct net_device *dev;
+ struct net_device *netdev;
struct macb *bp;
- dev = platform_get_drvdata(pdev);
+ netdev = platform_get_drvdata(pdev);
- if (dev) {
- bp = netdev_priv(dev);
- unregister_netdev(dev);
+ if (netdev) {
+ bp = netdev_priv(netdev);
+ unregister_netdev(netdev);
phy_exit(bp->phy);
mdiobus_unregister(bp->mii_bus);
mdiobus_free(bp->mii_bus);
@@ -5996,7 +5998,7 @@ static void macb_remove(struct platform_device *pdev)
pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_set_suspended(&pdev->dev);
phylink_destroy(bp->phylink);
- free_netdev(dev);
+ free_netdev(netdev);
}
}
@@ -6012,7 +6014,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
unsigned int q;
int err;
- if (!device_may_wakeup(&bp->dev->dev))
+ if (!device_may_wakeup(&bp->netdev->dev))
phy_exit(bp->phy);
if (!netif_running(netdev))
@@ -6022,7 +6024,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
if (bp->wolopts & WAKE_ARP) {
/* Check for IP address in WOL ARP mode */
rcu_read_lock();
- idev = __in_dev_get_rcu(bp->dev);
+ idev = __in_dev_get_rcu(bp->netdev);
if (idev)
ifa = rcu_dereference(idev->ifa_list);
if (!ifa) {
@@ -6150,7 +6152,7 @@ static int __maybe_unused macb_resume(struct device *dev)
unsigned int q;
int err;
- if (!device_may_wakeup(&bp->dev->dev))
+ if (!device_may_wakeup(&bp->netdev->dev))
phy_init(bp->phy);
if (!netif_running(netdev))
diff --git a/drivers/net/ethernet/cadence/macb_pci.c b/drivers/net/ethernet/cadence/macb_pci.c
index fc4f5aee6ab3..91108d4366f6 100644
--- a/drivers/net/ethernet/cadence/macb_pci.c
+++ b/drivers/net/ethernet/cadence/macb_pci.c
@@ -24,48 +24,48 @@
#define GEM_PCLK_RATE 50000000
#define GEM_HCLK_RATE 50000000
-static int macb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static int macb_probe(struct pci_dev *pci, const struct pci_device_id *id)
{
int err;
- struct platform_device *plat_dev;
+ struct platform_device *pdev;
struct platform_device_info plat_info;
struct macb_platform_data plat_data;
struct resource res[2];
/* enable pci device */
- err = pcim_enable_device(pdev);
+ err = pcim_enable_device(pci);
if (err < 0) {
- dev_err(&pdev->dev, "Enabling PCI device has failed: %d", err);
+ dev_err(&pci->dev, "Enabling PCI device has failed: %d", err);
return err;
}
- pci_set_master(pdev);
+ pci_set_master(pci);
/* set up resources */
memset(res, 0x00, sizeof(struct resource) * ARRAY_SIZE(res));
- res[0].start = pci_resource_start(pdev, 0);
- res[0].end = pci_resource_end(pdev, 0);
+ res[0].start = pci_resource_start(pci, 0);
+ res[0].end = pci_resource_end(pci, 0);
res[0].name = PCI_DRIVER_NAME;
res[0].flags = IORESOURCE_MEM;
- res[1].start = pci_irq_vector(pdev, 0);
+ res[1].start = pci_irq_vector(pci, 0);
res[1].name = PCI_DRIVER_NAME;
res[1].flags = IORESOURCE_IRQ;
- dev_info(&pdev->dev, "EMAC physical base addr: %pa\n",
+ dev_info(&pci->dev, "EMAC physical base addr: %pa\n",
&res[0].start);
/* set up macb platform data */
memset(&plat_data, 0, sizeof(plat_data));
/* initialize clocks */
- plat_data.pclk = clk_register_fixed_rate(&pdev->dev, "pclk", NULL, 0,
+ plat_data.pclk = clk_register_fixed_rate(&pci->dev, "pclk", NULL, 0,
GEM_PCLK_RATE);
if (IS_ERR(plat_data.pclk)) {
err = PTR_ERR(plat_data.pclk);
goto err_pclk_register;
}
- plat_data.hclk = clk_register_fixed_rate(&pdev->dev, "hclk", NULL, 0,
+ plat_data.hclk = clk_register_fixed_rate(&pci->dev, "hclk", NULL, 0,
GEM_HCLK_RATE);
if (IS_ERR(plat_data.hclk)) {
err = PTR_ERR(plat_data.hclk);
@@ -74,24 +74,24 @@ static int macb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* set up platform device info */
memset(&plat_info, 0, sizeof(plat_info));
- plat_info.parent = &pdev->dev;
- plat_info.fwnode = pdev->dev.fwnode;
+ plat_info.parent = &pci->dev;
+ plat_info.fwnode = pci->dev.fwnode;
plat_info.name = PLAT_DRIVER_NAME;
- plat_info.id = pdev->devfn;
+ plat_info.id = pci->devfn;
plat_info.res = res;
plat_info.num_res = ARRAY_SIZE(res);
plat_info.data = &plat_data;
plat_info.size_data = sizeof(plat_data);
- plat_info.dma_mask = pdev->dma_mask;
+ plat_info.dma_mask = pci->dma_mask;
/* register platform device */
- plat_dev = platform_device_register_full(&plat_info);
- if (IS_ERR(plat_dev)) {
- err = PTR_ERR(plat_dev);
+ pdev = platform_device_register_full(&plat_info);
+ if (IS_ERR(pdev)) {
+ err = PTR_ERR(pdev);
goto err_plat_dev_register;
}
- pci_set_drvdata(pdev, plat_dev);
+ pci_set_drvdata(pci, pdev);
return 0;
@@ -105,14 +105,14 @@ static int macb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return err;
}
-static void macb_remove(struct pci_dev *pdev)
+static void macb_remove(struct pci_dev *pci)
{
- struct platform_device *plat_dev = pci_get_drvdata(pdev);
- struct macb_platform_data *plat_data = dev_get_platdata(&plat_dev->dev);
+ struct platform_device *pdev = pci_get_drvdata(pci);
+ struct macb_platform_data *plat_data = dev_get_platdata(&pdev->dev);
clk_unregister(plat_data->pclk);
clk_unregister(plat_data->hclk);
- platform_device_unregister(plat_dev);
+ platform_device_unregister(pdev);
}
static const struct pci_device_id dev_id_table[] = {
diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
index d91f7b1aa39c..e5195d7dac1d 100644
--- a/drivers/net/ethernet/cadence/macb_ptp.c
+++ b/drivers/net/ethernet/cadence/macb_ptp.c
@@ -324,9 +324,9 @@ void gem_ptp_txstamp(struct macb *bp, struct sk_buff *skb,
skb_tstamp_tx(skb, &shhwtstamps);
}
-void gem_ptp_init(struct net_device *dev)
+void gem_ptp_init(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
bp->ptp_clock_info = gem_ptp_caps_template;
@@ -334,7 +334,7 @@ void gem_ptp_init(struct net_device *dev)
bp->tsu_rate = bp->ptp_info->get_tsu_rate(bp);
bp->ptp_clock_info.max_adj = bp->ptp_info->get_ptp_max_adj();
gem_ptp_init_timer(bp);
- bp->ptp_clock = ptp_clock_register(&bp->ptp_clock_info, &dev->dev);
+ bp->ptp_clock = ptp_clock_register(&bp->ptp_clock_info, &netdev->dev);
if (IS_ERR(bp->ptp_clock)) {
pr_err("ptp clock register failed: %ld\n",
PTR_ERR(bp->ptp_clock));
@@ -353,9 +353,9 @@ void gem_ptp_init(struct net_device *dev)
GEM_PTP_TIMER_NAME);
}
-void gem_ptp_remove(struct net_device *ndev)
+void gem_ptp_remove(struct net_device *netdev)
{
- struct macb *bp = netdev_priv(ndev);
+ struct macb *bp = netdev_priv(netdev);
if (bp->ptp_clock) {
ptp_clock_unregister(bp->ptp_clock);
@@ -378,10 +378,10 @@ static int gem_ptp_set_ts_mode(struct macb *bp,
return 0;
}
-int gem_get_hwtst(struct net_device *dev,
+int gem_get_hwtst(struct net_device *netdev,
struct kernel_hwtstamp_config *tstamp_config)
{
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
*tstamp_config = bp->tstamp_config;
if (!macb_dma_ptp(bp))
@@ -402,13 +402,13 @@ static void gem_ptp_set_one_step_sync(struct macb *bp, u8 enable)
macb_writel(bp, NCR, reg_val & ~MACB_BIT(OSSMODE));
}
-int gem_set_hwtst(struct net_device *dev,
+int gem_set_hwtst(struct net_device *netdev,
struct kernel_hwtstamp_config *tstamp_config,
struct netlink_ext_ack *extack)
{
enum macb_bd_control tx_bd_control = TSTAMP_DISABLED;
enum macb_bd_control rx_bd_control = TSTAMP_DISABLED;
- struct macb *bp = netdev_priv(dev);
+ struct macb *bp = netdev_priv(netdev);
u32 regval;
if (!macb_dma_ptp(bp))
--
2.53.0
* [PATCH net-next 02/11] net: macb: unify `struct macb *` naming convention
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
For historical reasons, MACB has both:
struct macb *bp;
struct macb *lp; // used in at91ether functions
Use only the former.
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 176 ++++++++++++++++---------------
1 file changed, 91 insertions(+), 85 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 00bd662b5e46..05ccb6f186f7 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -4958,71 +4958,72 @@ static int macb_init(struct platform_device *pdev,
static struct sifive_fu540_macb_mgmt *mgmt;
-static int at91ether_alloc_coherent(struct macb *lp)
+static int at91ether_alloc_coherent(struct macb *bp)
{
- struct macb_queue *q = &lp->queues[0];
+ struct macb_queue *queue = &bp->queues[0];
- q->rx_ring = dma_alloc_coherent(&lp->pdev->dev,
- (AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(lp)),
- &q->rx_ring_dma, GFP_KERNEL);
- if (!q->rx_ring)
+ queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev,
+ (AT91ETHER_MAX_RX_DESCR *
+ macb_dma_desc_get_size(bp)),
+ &queue->rx_ring_dma, GFP_KERNEL);
+ if (!queue->rx_ring)
return -ENOMEM;
- q->rx_buffers = dma_alloc_coherent(&lp->pdev->dev,
- AT91ETHER_MAX_RX_DESCR *
- AT91ETHER_MAX_RBUFF_SZ,
- &q->rx_buffers_dma, GFP_KERNEL);
- if (!q->rx_buffers) {
- dma_free_coherent(&lp->pdev->dev,
+ queue->rx_buffers = dma_alloc_coherent(&bp->pdev->dev,
+ AT91ETHER_MAX_RX_DESCR *
+ AT91ETHER_MAX_RBUFF_SZ,
+ &queue->rx_buffers_dma,
+ GFP_KERNEL);
+ if (!queue->rx_buffers) {
+ dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(lp),
- q->rx_ring, q->rx_ring_dma);
- q->rx_ring = NULL;
+ macb_dma_desc_get_size(bp),
+ queue->rx_ring, queue->rx_ring_dma);
+ queue->rx_ring = NULL;
return -ENOMEM;
}
return 0;
}
-static void at91ether_free_coherent(struct macb *lp)
+static void at91ether_free_coherent(struct macb *bp)
{
- struct macb_queue *q = &lp->queues[0];
+ struct macb_queue *queue = &bp->queues[0];
- if (q->rx_ring) {
- dma_free_coherent(&lp->pdev->dev,
+ if (queue->rx_ring) {
+ dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(lp),
- q->rx_ring, q->rx_ring_dma);
- q->rx_ring = NULL;
+ macb_dma_desc_get_size(bp),
+ queue->rx_ring, queue->rx_ring_dma);
+ queue->rx_ring = NULL;
}
- if (q->rx_buffers) {
- dma_free_coherent(&lp->pdev->dev,
+ if (queue->rx_buffers) {
+ dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
AT91ETHER_MAX_RBUFF_SZ,
- q->rx_buffers, q->rx_buffers_dma);
- q->rx_buffers = NULL;
+ queue->rx_buffers, queue->rx_buffers_dma);
+ queue->rx_buffers = NULL;
}
}
/* Initialize and start the Receiver and Transmit subsystems */
-static int at91ether_start(struct macb *lp)
+static int at91ether_start(struct macb *bp)
{
- struct macb_queue *q = &lp->queues[0];
+ struct macb_queue *queue = &bp->queues[0];
struct macb_dma_desc *desc;
dma_addr_t addr;
u32 ctl;
int i, ret;
- ret = at91ether_alloc_coherent(lp);
+ ret = at91ether_alloc_coherent(bp);
if (ret)
return ret;
- addr = q->rx_buffers_dma;
+ addr = queue->rx_buffers_dma;
for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) {
- desc = macb_rx_desc(q, i);
- macb_set_addr(lp, desc, addr);
+ desc = macb_rx_desc(queue, i);
+ macb_set_addr(bp, desc, addr);
desc->ctrl = 0;
addr += AT91ETHER_MAX_RBUFF_SZ;
}
@@ -5031,17 +5032,17 @@ static int at91ether_start(struct macb *lp)
desc->addr |= MACB_BIT(RX_WRAP);
/* Reset buffer index */
- q->rx_tail = 0;
+ queue->rx_tail = 0;
/* Program address of descriptor list in Rx Buffer Queue register */
- macb_writel(lp, RBQP, q->rx_ring_dma);
+ macb_writel(bp, RBQP, queue->rx_ring_dma);
/* Enable Receive and Transmit */
- ctl = macb_readl(lp, NCR);
- macb_writel(lp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE));
+ ctl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE));
/* Enable MAC interrupts */
- macb_writel(lp, IER, MACB_BIT(RCOMP) |
+ macb_writel(bp, IER, MACB_BIT(RCOMP) |
MACB_BIT(RXUBR) |
MACB_BIT(ISR_TUND) |
MACB_BIT(ISR_RLE) |
@@ -5052,12 +5053,12 @@ static int at91ether_start(struct macb *lp)
return 0;
}
-static void at91ether_stop(struct macb *lp)
+static void at91ether_stop(struct macb *bp)
{
u32 ctl;
/* Disable MAC interrupts */
- macb_writel(lp, IDR, MACB_BIT(RCOMP) |
+ macb_writel(bp, IDR, MACB_BIT(RCOMP) |
MACB_BIT(RXUBR) |
MACB_BIT(ISR_TUND) |
MACB_BIT(ISR_RLE) |
@@ -5066,35 +5067,35 @@ static void at91ether_stop(struct macb *lp)
MACB_BIT(HRESP));
/* Disable Receiver and Transmitter */
- ctl = macb_readl(lp, NCR);
- macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE)));
+ ctl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE)));
/* Free resources. */
- at91ether_free_coherent(lp);
+ at91ether_free_coherent(bp);
}
/* Open the ethernet interface */
static int at91ether_open(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(netdev);
+ struct macb *bp = netdev_priv(netdev);
u32 ctl;
int ret;
- ret = pm_runtime_resume_and_get(&lp->pdev->dev);
+ ret = pm_runtime_resume_and_get(&bp->pdev->dev);
if (ret < 0)
return ret;
/* Clear internal statistics */
- ctl = macb_readl(lp, NCR);
- macb_writel(lp, NCR, ctl | MACB_BIT(CLRSTAT));
+ ctl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctl | MACB_BIT(CLRSTAT));
- macb_set_hwaddr(lp);
+ macb_set_hwaddr(bp);
- ret = at91ether_start(lp);
+ ret = at91ether_start(bp);
if (ret)
goto pm_exit;
- ret = macb_phylink_connect(lp);
+ ret = macb_phylink_connect(bp);
if (ret)
goto stop;
@@ -5103,25 +5104,25 @@ static int at91ether_open(struct net_device *netdev)
return 0;
stop:
- at91ether_stop(lp);
+ at91ether_stop(bp);
pm_exit:
- pm_runtime_put_sync(&lp->pdev->dev);
+ pm_runtime_put_sync(&bp->pdev->dev);
return ret;
}
/* Close the interface */
static int at91ether_close(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(netdev);
+ struct macb *bp = netdev_priv(netdev);
netif_stop_queue(netdev);
- phylink_stop(lp->phylink);
- phylink_disconnect_phy(lp->phylink);
+ phylink_stop(bp->phylink);
+ phylink_disconnect_phy(bp->phylink);
- at91ether_stop(lp);
+ at91ether_stop(bp);
- pm_runtime_put(&lp->pdev->dev);
+ pm_runtime_put(&bp->pdev->dev);
return 0;
}
@@ -5130,19 +5131,21 @@ static int at91ether_close(struct net_device *netdev)
static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct macb *lp = netdev_priv(netdev);
+ struct macb *bp = netdev_priv(netdev);
+ struct device *dev = &bp->pdev->dev;
- if (macb_readl(lp, TSR) & MACB_BIT(RM9200_BNQ)) {
+ if (macb_readl(bp, TSR) & MACB_BIT(RM9200_BNQ)) {
int desc = 0;
netif_stop_queue(netdev);
/* Store packet information (to free when Tx completed) */
- lp->rm9200_txq[desc].skb = skb;
- lp->rm9200_txq[desc].size = skb->len;
- lp->rm9200_txq[desc].mapping = dma_map_single(&lp->pdev->dev, skb->data,
- skb->len, DMA_TO_DEVICE);
- if (dma_mapping_error(&lp->pdev->dev, lp->rm9200_txq[desc].mapping)) {
+ bp->rm9200_txq[desc].skb = skb;
+ bp->rm9200_txq[desc].size = skb->len;
+ bp->rm9200_txq[desc].mapping = dma_map_single(dev, skb->data,
+ skb->len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, bp->rm9200_txq[desc].mapping)) {
dev_kfree_skb_any(skb);
netdev->stats.tx_dropped++;
netdev_err(netdev, "%s: DMA mapping error\n", __func__);
@@ -5150,9 +5153,9 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
}
/* Set address of the data in the Transmit Address register */
- macb_writel(lp, TAR, lp->rm9200_txq[desc].mapping);
+ macb_writel(bp, TAR, bp->rm9200_txq[desc].mapping);
/* Set length of the packet in the Transmit Control register */
- macb_writel(lp, TCR, skb->len);
+ macb_writel(bp, TCR, skb->len);
} else {
netdev_err(netdev, "%s called, but device is busy!\n",
@@ -5168,16 +5171,17 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
*/
static void at91ether_rx(struct net_device *netdev)
{
- struct macb *lp = netdev_priv(netdev);
- struct macb_queue *q = &lp->queues[0];
+ struct macb *bp = netdev_priv(netdev);
+ struct macb_queue *queue = &bp->queues[0];
struct macb_dma_desc *desc;
unsigned char *p_recv;
struct sk_buff *skb;
unsigned int pktlen;
- desc = macb_rx_desc(q, q->rx_tail);
+ desc = macb_rx_desc(queue, queue->rx_tail);
while (desc->addr & MACB_BIT(RX_USED)) {
- p_recv = q->rx_buffers + q->rx_tail * AT91ETHER_MAX_RBUFF_SZ;
+ p_recv = queue->rx_buffers +
+ queue->rx_tail * AT91ETHER_MAX_RBUFF_SZ;
pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
skb = netdev_alloc_skb(netdev, pktlen + 2);
if (skb) {
@@ -5199,12 +5203,12 @@ static void at91ether_rx(struct net_device *netdev)
desc->addr &= ~MACB_BIT(RX_USED);
/* wrap after last buffer */
- if (q->rx_tail == AT91ETHER_MAX_RX_DESCR - 1)
- q->rx_tail = 0;
+ if (queue->rx_tail == AT91ETHER_MAX_RX_DESCR - 1)
+ queue->rx_tail = 0;
else
- q->rx_tail++;
+ queue->rx_tail++;
- desc = macb_rx_desc(q, q->rx_tail);
+ desc = macb_rx_desc(queue, queue->rx_tail);
}
}
@@ -5212,14 +5216,14 @@ static void at91ether_rx(struct net_device *netdev)
static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
{
struct net_device *netdev = dev_id;
- struct macb *lp = netdev_priv(netdev);
+ struct macb *bp = netdev_priv(netdev);
u32 intstatus, ctl;
unsigned int desc;
/* MAC Interrupt Status register indicates what interrupts are pending.
* It is automatically cleared once read.
*/
- intstatus = macb_readl(lp, ISR);
+ intstatus = macb_readl(bp, ISR);
/* Receive complete */
if (intstatus & MACB_BIT(RCOMP))
@@ -5232,23 +5236,25 @@ static irqreturn_t at91ether_interrupt(int irq, void *dev_id)
netdev->stats.tx_errors++;
desc = 0;
- if (lp->rm9200_txq[desc].skb) {
- dev_consume_skb_irq(lp->rm9200_txq[desc].skb);
- lp->rm9200_txq[desc].skb = NULL;
- dma_unmap_single(&lp->pdev->dev, lp->rm9200_txq[desc].mapping,
- lp->rm9200_txq[desc].size, DMA_TO_DEVICE);
+ if (bp->rm9200_txq[desc].skb) {
+ dev_consume_skb_irq(bp->rm9200_txq[desc].skb);
+ bp->rm9200_txq[desc].skb = NULL;
+ dma_unmap_single(&bp->pdev->dev,
+ bp->rm9200_txq[desc].mapping,
+ bp->rm9200_txq[desc].size,
+ DMA_TO_DEVICE);
netdev->stats.tx_packets++;
- netdev->stats.tx_bytes += lp->rm9200_txq[desc].size;
+ netdev->stats.tx_bytes += bp->rm9200_txq[desc].size;
}
netif_wake_queue(netdev);
}
/* Work-around for EMAC Errata section 41.3.1 */
if (intstatus & MACB_BIT(RXUBR)) {
- ctl = macb_readl(lp, NCR);
- macb_writel(lp, NCR, ctl & ~MACB_BIT(RE));
+ ctl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctl & ~MACB_BIT(RE));
wmb();
- macb_writel(lp, NCR, ctl | MACB_BIT(RE));
+ macb_writel(bp, NCR, ctl | MACB_BIT(RE));
}
if (intstatus & MACB_BIT(ISR_ROVR))
--
2.53.0
* [PATCH net-next 03/11] net: macb: unify queue index variable naming convention and types
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
Queue index variables are named either q or queue_index, and their types
vary between int, unsigned int, u32 and u16. Use `unsigned int q`
everywhere.
Skip over the taprio functions. They use `u8 queue_id`, which matches the
`struct macb_queue_enst_config` field; using `queue_id` everywhere else
would be too verbose.
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 05ccb6f186f7..087401163771 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -873,7 +873,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
static void gem_shuffle_tx_rings(struct macb *bp)
{
struct macb_queue *queue;
- int q;
+ unsigned int q;
for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++)
gem_shuffle_tx_one_ring(queue);
@@ -1254,7 +1254,7 @@ static void macb_tx_error_task(struct work_struct *work)
tx_error_task);
bool halt_timeout = false;
struct macb *bp = queue->bp;
- u32 queue_index;
+ unsigned int q;
u32 packets = 0;
u32 bytes = 0;
struct macb_tx_skb *tx_skb;
@@ -1263,9 +1263,9 @@ static void macb_tx_error_task(struct work_struct *work)
unsigned int tail;
unsigned long flags;
- queue_index = queue - bp->queues;
+ q = queue - bp->queues;
netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
- queue_index, queue->tx_tail, queue->tx_head);
+ q, queue->tx_tail, queue->tx_head);
/* Prevent the queue NAPI TX poll from running, as it calls
* macb_tx_complete(), which in turn may call netif_wake_subqueue().
@@ -1338,7 +1338,7 @@ static void macb_tx_error_task(struct work_struct *work)
macb_tx_unmap(bp, tx_skb, 0);
}
- netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+ netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q),
packets, bytes);
/* Set end of TX queue */
@@ -1403,7 +1403,7 @@ static bool ptp_one_step_sync(struct sk_buff *skb)
static int macb_tx_complete(struct macb_queue *queue, int budget)
{
struct macb *bp = queue->bp;
- u16 queue_index = queue - bp->queues;
+ unsigned int q = queue - bp->queues;
unsigned long flags;
unsigned int tail;
unsigned int head;
@@ -1465,14 +1465,14 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
}
}
- netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+ netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q),
packets, bytes);
queue->tx_tail = tail;
- if (__netif_subqueue_stopped(bp->netdev, queue_index) &&
+ if (__netif_subqueue_stopped(bp->netdev, q) &&
CIRC_CNT(queue->tx_head, queue->tx_tail,
bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
- netif_wake_subqueue(bp->netdev, queue_index);
+ netif_wake_subqueue(bp->netdev, q);
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
if (packets)
@@ -2496,10 +2496,10 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *netdev)
static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- u16 queue_index = skb_get_queue_mapping(skb);
struct macb *bp = netdev_priv(netdev);
- struct macb_queue *queue = &bp->queues[queue_index];
+ unsigned int q = skb_get_queue_mapping(skb);
unsigned int desc_cnt, nr_frags, frag_size, f;
+ struct macb_queue *queue = &bp->queues[q];
unsigned int hdrlen;
unsigned long flags;
bool is_lso;
@@ -2539,7 +2539,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
#if defined(DEBUG) && defined(VERBOSE_DEBUG)
netdev_vdbg(bp->netdev,
"start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
- queue_index, skb->len, skb->head, skb->data,
+ q, skb->len, skb->head, skb->data,
skb_tail_pointer(skb), skb_end_pointer(skb));
print_hex_dump(KERN_DEBUG, "data: ", DUMP_PREFIX_OFFSET, 16, 1,
skb->data, 16, true);
@@ -2565,7 +2565,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
/* This is a hard error, log it. */
if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
bp->tx_ring_size) < desc_cnt) {
- netif_stop_subqueue(netdev, queue_index);
+ netif_stop_subqueue(netdev, q);
netdev_dbg(netdev, "tx_head = %u, tx_tail = %u\n",
queue->tx_head, queue->tx_tail);
ret = NETDEV_TX_BUSY;
@@ -2581,7 +2581,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
/* Make newly initialized descriptor visible to hardware */
wmb();
skb_tx_timestamp(skb);
- netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, queue_index),
+ netdev_tx_sent_queue(netdev_get_tx_queue(bp->netdev, q),
skb->len);
spin_lock(&bp->lock);
@@ -2590,7 +2590,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
spin_unlock(&bp->lock);
if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
- netif_stop_subqueue(netdev, queue_index);
+ netif_stop_subqueue(netdev, q);
unlock:
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH net-next 04/11] net: macb: enforce reverse christmas tree (RCT) convention
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (2 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 03/11] net: macb: unify queue index variable naming convention and types Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime Théo Lebrun
` (7 subsequent siblings)
11 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
Enforce the reverse Christmas tree convention in the following functions:
macb_tx_error_task()
gem_rx_refill()
gem_rx()
macb_rx_frame()
macb_init_rx_ring()
macb_rx()
macb_rx_pending()
macb_start_xmit()
The goal is to minimise unrelated diff in future patches.
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 61 ++++++++++++++++----------------
1 file changed, 30 insertions(+), 31 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 087401163771..081f220f6756 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1250,20 +1250,19 @@ static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
static void macb_tx_error_task(struct work_struct *work)
{
- struct macb_queue *queue = container_of(work, struct macb_queue,
- tx_error_task);
- bool halt_timeout = false;
- struct macb *bp = queue->bp;
- unsigned int q;
- u32 packets = 0;
- u32 bytes = 0;
- struct macb_tx_skb *tx_skb;
- struct macb_dma_desc *desc;
- struct sk_buff *skb;
- unsigned int tail;
- unsigned long flags;
+ struct macb_queue *queue = container_of(work, struct macb_queue,
+ tx_error_task);
+ unsigned int q = queue - queue->bp->queues;
+ struct macb *bp = queue->bp;
+ struct macb_tx_skb *tx_skb;
+ struct macb_dma_desc *desc;
+ bool halt_timeout = false;
+ struct sk_buff *skb;
+ unsigned long flags;
+ unsigned int tail;
+ u32 packets = 0;
+ u32 bytes = 0;
- q = queue - bp->queues;
netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
q, queue->tx_tail, queue->tx_head);
@@ -1483,11 +1482,11 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
static void gem_rx_refill(struct macb_queue *queue)
{
- unsigned int entry;
- struct sk_buff *skb;
- dma_addr_t paddr;
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
+ struct sk_buff *skb;
+ unsigned int entry;
+ dma_addr_t paddr;
while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail,
bp->rx_ring_size) > 0) {
@@ -1580,11 +1579,11 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
int budget)
{
struct macb *bp = queue->bp;
- unsigned int len;
- unsigned int entry;
- struct sk_buff *skb;
- struct macb_dma_desc *desc;
- int count = 0;
+ struct macb_dma_desc *desc;
+ struct sk_buff *skb;
+ unsigned int entry;
+ unsigned int len;
+ int count = 0;
while (count < budget) {
u32 ctrl;
@@ -1670,12 +1669,12 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
unsigned int first_frag, unsigned int last_frag)
{
- unsigned int len;
- unsigned int frag;
+ struct macb *bp = queue->bp;
+ struct macb_dma_desc *desc;
unsigned int offset;
struct sk_buff *skb;
- struct macb_dma_desc *desc;
- struct macb *bp = queue->bp;
+ unsigned int frag;
+ unsigned int len;
desc = macb_rx_desc(queue, last_frag);
len = desc->ctrl & bp->rx_frm_len_mask;
@@ -1751,9 +1750,9 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
static inline void macb_init_rx_ring(struct macb_queue *queue)
{
+ struct macb_dma_desc *desc = NULL;
struct macb *bp = queue->bp;
dma_addr_t addr;
- struct macb_dma_desc *desc = NULL;
int i;
addr = queue->rx_buffers_dma;
@@ -1772,9 +1771,9 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
{
struct macb *bp = queue->bp;
bool reset_rx_queue = false;
- int received = 0;
- unsigned int tail;
int first_frag = -1;
+ unsigned int tail;
+ int received = 0;
for (tail = queue->rx_tail; budget > 0; tail++) {
struct macb_dma_desc *desc = macb_rx_desc(queue, tail);
@@ -1849,8 +1848,8 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
static bool macb_rx_pending(struct macb_queue *queue)
{
struct macb *bp = queue->bp;
- unsigned int entry;
- struct macb_dma_desc *desc;
+ struct macb_dma_desc *desc;
+ unsigned int entry;
entry = macb_rx_ring_wrap(bp, queue->rx_tail);
desc = macb_rx_desc(queue, entry);
@@ -2500,10 +2499,10 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
unsigned int q = skb_get_queue_mapping(skb);
unsigned int desc_cnt, nr_frags, frag_size, f;
struct macb_queue *queue = &bp->queues[q];
+ netdev_tx_t ret = NETDEV_TX_OK;
unsigned int hdrlen;
unsigned long flags;
bool is_lso;
- netdev_tx_t ret = NETDEV_TX_OK;
if (macb_clear_csum(skb)) {
dev_kfree_skb_any(skb);
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (3 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 04/11] net: macb: enforce reverse christmas tree (RCT) convention Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-02 11:14 ` Nicolai Buchwitz
2026-04-01 16:39 ` [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management Théo Lebrun
` (6 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
The tieoff descriptor is an RX DMA descriptor ring of size one. It gets
configured onto queues for Wake-on-LAN during system-wide suspend when
hardware does not support disabling individual queues
(MACB_CAPS_QUEUE_DISABLE).
The MACB/GEM driver allocates it alongside the main RX ring inside
macb_alloc_consistent() at open time; it is freed by
macb_free_consistent() at close.
Allocate it once at probe instead, and free it on probe failure or at
device removal. This makes the tieoff descriptor's lifetime much longer,
avoiding a repeated coherent buffer allocation on each open/close cycle.
Main benefit: we dissociate its lifetime from the main ring's lifetime,
so there is less work to do when (re)allocating resources. This
currently happens on close/open, but will soon also happen on context
swap operations (set_ringparam, change_mtu, set_channels, etc).
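As a sketch of what the tieoff descriptor encodes: a single wrapping
descriptor with no free slot, so the controller parks on it without ever
writing to memory. The bit helpers below are hypothetical mirrors of the
driver's MACB_BIT() macro, and the exact bit positions (USED at bit 0,
WRAP at bit 1 of the addr word) are an assumption for illustration:

```c
#include <stdint.h>

/* Hypothetical mirrors of the driver's bit helpers; the offsets below
 * are assumed for illustration, not taken from macb.h verbatim. */
#define MACB_RX_USED_OFFSET 0
#define MACB_RX_WRAP_OFFSET 1
#define MACB_BIT(name) (1u << MACB_##name##_OFFSET)

/* A tieoff ring is one descriptor marked WRAP (ring of length one) and
 * USED (no free slot): the controller sees a full, immediately wrapping
 * ring and never stores a frame through the disabled queue. */
static uint32_t tieoff_addr_word(void)
{
	return MACB_BIT(RX_WRAP) | MACB_BIT(RX_USED);
}
```

Since both flags live in the descriptor's address word, tying off a queue
is a single coherent write, which is why one shared descriptor can serve
every unused queue.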
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 70 ++++++++++++++++----------------
1 file changed, 36 insertions(+), 34 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 081f220f6756..d5023fdc0756 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2679,12 +2679,6 @@ static void macb_free_consistent(struct macb *bp)
unsigned int q;
size_t size;
- if (bp->rx_ring_tieoff) {
- dma_free_coherent(dev, macb_dma_desc_get_size(bp),
- bp->rx_ring_tieoff, bp->rx_ring_tieoff_dma);
- bp->rx_ring_tieoff = NULL;
- }
-
bp->macbgem_ops.mog_free_rx_buffers(bp);
size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
@@ -2782,16 +2776,6 @@ static int macb_alloc_consistent(struct macb *bp)
if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
goto out_err;
- /* Required for tie off descriptor for PM cases */
- if (!(bp->caps & MACB_CAPS_QUEUE_DISABLE)) {
- bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
- macb_dma_desc_get_size(bp),
- &bp->rx_ring_tieoff_dma,
- GFP_KERNEL);
- if (!bp->rx_ring_tieoff)
- goto out_err;
- }
-
return 0;
out_err:
@@ -2799,19 +2783,6 @@ static int macb_alloc_consistent(struct macb *bp)
return -ENOMEM;
}
-static void macb_init_tieoff(struct macb *bp)
-{
- struct macb_dma_desc *desc = bp->rx_ring_tieoff;
-
- if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
- return;
- /* Setup a wrapping descriptor with no free slots
- * (WRAP and USED) to tie off/disable unused RX queues.
- */
- macb_set_addr(bp, desc, MACB_BIT(RX_WRAP) | MACB_BIT(RX_USED));
- desc->ctrl = 0;
-}
-
static void gem_init_rx_ring(struct macb_queue *queue)
{
queue->rx_tail = 0;
@@ -2839,8 +2810,6 @@ static void gem_init_rings(struct macb *bp)
gem_init_rx_ring(queue);
}
-
- macb_init_tieoff(bp);
}
static void macb_init_rings(struct macb *bp)
@@ -2858,8 +2827,6 @@ static void macb_init_rings(struct macb *bp)
bp->queues[0].tx_head = 0;
bp->queues[0].tx_tail = 0;
desc->ctrl |= MACB_BIT(TX_WRAP);
-
- macb_init_tieoff(bp);
}
static void macb_reset_hw(struct macb *bp)
@@ -5530,6 +5497,33 @@ static int eyeq5_init(struct platform_device *pdev)
return ret;
}
+static int macb_alloc_tieoff(struct macb *bp)
+{
+ /* Tieoff is a workaround in case HW cannot disable queues, for PM. */
+ if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
+ return 0;
+
+ bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
+ macb_dma_desc_get_size(bp),
+ &bp->rx_ring_tieoff_dma,
+ GFP_KERNEL);
+ if (!bp->rx_ring_tieoff)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void macb_free_tieoff(struct macb *bp)
+{
+ if (!bp->rx_ring_tieoff)
+ return;
+
+ dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
+ bp->rx_ring_tieoff,
+ bp->rx_ring_tieoff_dma);
+ bp->rx_ring_tieoff = NULL;
+}
+
static const struct macb_usrio_config at91_default_usrio = {
.mii = MACB_BIT(MII),
.rmii = MACB_BIT(RMII),
@@ -5946,10 +5940,14 @@ static int macb_probe(struct platform_device *pdev)
netif_carrier_off(netdev);
+ err = macb_alloc_tieoff(bp);
+ if (err)
+ goto err_out_unregister_mdio;
+
err = register_netdev(netdev);
if (err) {
dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
- goto err_out_unregister_mdio;
+ goto err_out_free_tieoff;
}
INIT_WORK(&bp->hresp_err_bh_work, macb_hresp_error_task);
@@ -5963,6 +5961,9 @@ static int macb_probe(struct platform_device *pdev)
return 0;
+err_out_free_tieoff:
+ macb_free_tieoff(bp);
+
err_out_unregister_mdio:
mdiobus_unregister(bp->mii_bus);
mdiobus_free(bp->mii_bus);
@@ -5992,6 +5993,7 @@ static void macb_remove(struct platform_device *pdev)
if (netdev) {
bp = netdev_priv(netdev);
unregister_netdev(netdev);
+ macb_free_tieoff(bp);
phy_exit(bp->phy);
mdiobus_unregister(bp->mii_bus);
mdiobus_free(bp->mii_bus);
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (4 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-02 11:22 ` Nicolai Buchwitz
2026-04-01 16:39 ` [PATCH net-next 07/11] net: macb: avoid macb_init_rx_buffer_size() modifying state Théo Lebrun
` (5 subsequent siblings)
11 siblings, 1 reply; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
Whenever an operation requires buffer reallocation, we close the
interface, update parameters and reopen it. To improve reliability under
memory pressure, we should instead allocate new buffers, reconfigure the
hardware and only then free the old buffers. This requires MACB to
support having multiple "contexts" in parallel.
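The allocate-before-swap sequence above can be sketched with a toy
context; all names here (toy_ctx, toy_swap, ...) are illustrative
stand-ins, not the driver's API, and the interface stop/start steps are
reduced to a comment:

```c
#include <stdlib.h>

/* Toy stand-in for struct macb_context: owns the buffers it describes. */
struct toy_ctx {
	size_t ring_size;
	void *buffers;
};

struct toy_dev {
	struct toy_ctx *ctx;	/* currently active context */
};

static struct toy_ctx *toy_ctx_alloc(size_t ring_size)
{
	struct toy_ctx *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return NULL;
	ctx->ring_size = ring_size;
	ctx->buffers = calloc(ring_size, 64);	/* pretend 64-byte slots */
	if (!ctx->buffers) {
		free(ctx);
		return NULL;
	}
	return ctx;
}

static void toy_ctx_free(struct toy_ctx *ctx)
{
	if (!ctx)
		return;
	free(ctx->buffers);
	free(ctx);
}

/* Swap in a freshly allocated context. On allocation failure we return
 * early and the old context -- and thus the interface -- is untouched. */
static int toy_swap(struct toy_dev *dev, size_t new_ring_size)
{
	struct toy_ctx *old = dev->ctx;
	struct toy_ctx *fresh = toy_ctx_alloc(new_ring_size);

	if (!fresh)
		return -1;
	/* (real driver: stop interface, program HW, start interface) */
	dev->ctx = fresh;
	toy_ctx_free(old);
	return 0;
}
```

The ordering is the whole point: the only fallible step happens before
anything visible to the interface changes.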
Introduce this concept by adding the macb_context struct, which owns all
queue buffers and their associated parameters. We do not yet support
multiple contexts in parallel, because all functions access bp->ctx
(the currently active context) directly.
Steps:
- Introduce `struct macb_context` and its children `struct macb_rxq`
and `struct macb_txq`. Context fields are moved from `struct macb`,
and rxq/txq fields from `struct macb_queue`.
Using two separate structs per queue simplifies accesses: we grab a
txq/rxq local variable and access fields like txq->head instead of
queue->tx_head. It also anecdotally improves data locality.
- macb_init_dflt() does not set bp->ctx->{rx,tx}_ring_size to default
values, as ctx is not allocated yet. Instead, introduce
bp->configured_{rx,tx}_ring_size, which are updated on user requests.
- macb_open() starts by allocating bp->ctx. It gets freed in the
open error codepath or by macb_close().
- Guided by compile errors, update all codepaths. Most of the diff is
changing `queue->tx_*` to `txq->*` and `queue->rx_*` to `rxq->*`, with
a new local variable. rx_buffer_size / rx_ring_size / tx_ring_size
also move from bp to bp->ctx.
Introduce two helpers, macb_txq() and macb_rxq(), to convert macb_queue
pointers into their per-queue context counterparts.
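The helper pattern — recovering the queue index by pointer arithmetic on
the bp->queues array, then using it to index the parallel per-queue
arrays in the active context — can be sketched with toy types (the toy_*
names mirror, but do not reproduce, the driver's structures):

```c
#include <stddef.h>

#define TOY_MAX_QUEUES 8

struct toy_txq { unsigned int head, tail; };

struct toy_ctx { struct toy_txq txq[TOY_MAX_QUEUES]; };

struct toy_bp;
struct toy_queue { struct toy_bp *bp; };

struct toy_bp {
	struct toy_ctx *ctx;
	struct toy_queue queues[TOY_MAX_QUEUES];
};

/* Mirror of macb_txq(): the queue's position in bp->queues selects the
 * matching per-queue state in whichever context is currently active. */
static struct toy_txq *toy_txq(struct toy_queue *queue)
{
	struct toy_bp *bp = queue->bp;
	size_t q = queue - bp->queues;	/* pointer difference = queue index */

	return &bp->ctx->txq[q];
}
```

Because the index is computed through bp->ctx at each call, swapping
bp->ctx atomically redirects every queue to its new ring state.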
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb.h | 49 ++--
drivers/net/ethernet/cadence/macb_main.c | 442 ++++++++++++++++++-------------
2 files changed, 296 insertions(+), 195 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index d6dd1d356e12..8821205e8875 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1272,21 +1272,10 @@ struct macb_queue {
/* Lock to protect tx_head and tx_tail */
spinlock_t tx_ptr_lock;
- unsigned int tx_head, tx_tail;
- struct macb_dma_desc *tx_ring;
- struct macb_tx_skb *tx_skb;
- dma_addr_t tx_ring_dma;
struct work_struct tx_error_task;
bool txubr_pending;
struct napi_struct napi_tx;
- dma_addr_t rx_ring_dma;
- dma_addr_t rx_buffers_dma;
- unsigned int rx_tail;
- unsigned int rx_prepared_head;
- struct macb_dma_desc *rx_ring;
- struct sk_buff **rx_skbuff;
- void *rx_buffers;
struct napi_struct napi_rx;
struct queue_stats stats;
};
@@ -1301,6 +1290,32 @@ struct ethtool_rx_fs_list {
unsigned int count;
};
+struct macb_rxq {
+ struct macb_dma_desc *ring; /* MACB & GEM */
+ dma_addr_t ring_dma; /* MACB & GEM */
+ unsigned int tail; /* MACB & GEM */
+ unsigned int prepared_head; /* GEM */
+ struct sk_buff **skbuff; /* GEM */
+ dma_addr_t buffers_dma; /* MACB */
+ void *buffers; /* MACB */
+};
+
+struct macb_txq {
+ unsigned int head;
+ unsigned int tail;
+ struct macb_dma_desc *ring;
+ dma_addr_t ring_dma;
+ struct macb_tx_skb *skb;
+};
+
+struct macb_context {
+ unsigned int rx_buffer_size;
+ unsigned int rx_ring_size;
+ unsigned int tx_ring_size;
+ struct macb_rxq rxq[MACB_MAX_QUEUES];
+ struct macb_txq txq[MACB_MAX_QUEUES];
+};
+
struct macb {
void __iomem *regs;
bool native_io;
@@ -1309,12 +1324,16 @@ struct macb {
u32 (*macb_reg_readl)(struct macb *bp, int offset);
void (*macb_reg_writel)(struct macb *bp, int offset, u32 value);
+ /*
+ * Context stores all its parameters.
+ * But we must remember them across closure.
+ */
+ unsigned int configured_rx_ring_size;
+ unsigned int configured_tx_ring_size;
+ struct macb_context *ctx;
+
struct macb_dma_desc *rx_ring_tieoff;
dma_addr_t rx_ring_tieoff_dma;
- size_t rx_buffer_size;
-
- unsigned int rx_ring_size;
- unsigned int tx_ring_size;
unsigned int num_queues;
struct macb_queue queues[MACB_MAX_QUEUES];
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index d5023fdc0756..0f63d9b89c11 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -61,7 +61,7 @@ struct sifive_fu540_macb_mgmt {
#define MAX_TX_RING_SIZE 4096
/* level of occupied TX descriptors under which we wake up TX process */
-#define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->tx_ring_size / 4)
+#define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->ctx->tx_ring_size / 4)
#define MACB_RX_INT_FLAGS (MACB_BIT(RCOMP) | MACB_BIT(ISR_ROVR))
#define MACB_TX_ERR_FLAGS (MACB_BIT(ISR_TUND) \
@@ -148,48 +148,73 @@ static struct macb_dma_desc_64 *macb_64b_desc(struct macb *bp, struct macb_dma_d
/* Ring buffer accessors */
static unsigned int macb_tx_ring_wrap(struct macb *bp, unsigned int index)
{
- return index & (bp->tx_ring_size - 1);
+ return index & (bp->ctx->tx_ring_size - 1);
+}
+
+static struct macb_txq *macb_txq(struct macb_queue *queue)
+{
+ struct macb *bp = queue->bp;
+ unsigned int q = queue - bp->queues;
+
+ return &bp->ctx->txq[q];
+}
+
+static struct macb_rxq *macb_rxq(struct macb_queue *queue)
+{
+ struct macb *bp = queue->bp;
+ unsigned int q = queue - bp->queues;
+
+ return &bp->ctx->rxq[q];
}
static struct macb_dma_desc *macb_tx_desc(struct macb_queue *queue,
unsigned int index)
{
+ struct macb_txq *txq = macb_txq(queue);
+
index = macb_tx_ring_wrap(queue->bp, index);
index = macb_adj_dma_desc_idx(queue->bp, index);
- return &queue->tx_ring[index];
+ return &txq->ring[index];
}
static struct macb_tx_skb *macb_tx_skb(struct macb_queue *queue,
unsigned int index)
{
- return &queue->tx_skb[macb_tx_ring_wrap(queue->bp, index)];
+ struct macb_txq *txq = macb_txq(queue);
+
+ return &txq->skb[macb_tx_ring_wrap(queue->bp, index)];
}
static dma_addr_t macb_tx_dma(struct macb_queue *queue, unsigned int index)
{
+ struct macb_txq *txq = macb_txq(queue);
dma_addr_t offset;
offset = macb_tx_ring_wrap(queue->bp, index) *
macb_dma_desc_get_size(queue->bp);
- return queue->tx_ring_dma + offset;
+ return txq->ring_dma + offset;
}
static unsigned int macb_rx_ring_wrap(struct macb *bp, unsigned int index)
{
- return index & (bp->rx_ring_size - 1);
+ return index & (bp->ctx->rx_ring_size - 1);
}
static struct macb_dma_desc *macb_rx_desc(struct macb_queue *queue, unsigned int index)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
+
index = macb_rx_ring_wrap(queue->bp, index);
index = macb_adj_dma_desc_idx(queue->bp, index);
- return &queue->rx_ring[index];
+ return &rxq->ring[index];
}
static void *macb_rx_buffer(struct macb_queue *queue, unsigned int index)
{
- return queue->rx_buffers + queue->bp->rx_buffer_size *
+ struct macb_rxq *rxq = macb_rxq(queue);
+
+ return rxq->buffers + queue->bp->ctx->rx_buffer_size *
macb_rx_ring_wrap(queue->bp, index);
}
@@ -459,19 +484,23 @@ static int macb_mdio_write_c45(struct mii_bus *bus, int mii_id,
static void macb_init_buffers(struct macb *bp)
{
struct macb_queue *queue;
+ struct macb_rxq *rxq;
+ struct macb_txq *txq;
unsigned int q;
/* Single register for all queues' high 32 bits. */
if (macb_dma64(bp)) {
- macb_writel(bp, RBQPH,
- upper_32_bits(bp->queues[0].rx_ring_dma));
- macb_writel(bp, TBQPH,
- upper_32_bits(bp->queues[0].tx_ring_dma));
+ rxq = &bp->ctx->rxq[0];
+ txq = &bp->ctx->txq[0];
+ macb_writel(bp, RBQPH, upper_32_bits(rxq->ring_dma));
+ macb_writel(bp, TBQPH, upper_32_bits(txq->ring_dma));
}
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- queue_writel(queue, RBQP, lower_32_bits(queue->rx_ring_dma));
- queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
+ rxq = &bp->ctx->rxq[q];
+ txq = &bp->ctx->txq[q];
+ queue_writel(queue, RBQP, lower_32_bits(rxq->ring_dma));
+ queue_writel(queue, TBQP, lower_32_bits(txq->ring_dma));
}
}
@@ -644,11 +673,12 @@ static bool macb_tx_lpi_set(struct macb *bp, bool enable)
static bool macb_tx_all_queues_idle(struct macb *bp)
{
- struct macb_queue *queue;
+ struct macb_txq *txq;
unsigned int q;
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- if (READ_ONCE(queue->tx_head) != READ_ONCE(queue->tx_tail))
+ for (q = 0; q < bp->num_queues; ++q) {
+ txq = &bp->ctx->txq[q];
+ if (READ_ONCE(txq->head) != READ_ONCE(txq->tail))
return false;
}
return true;
@@ -795,6 +825,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
struct macb_dma_desc *desc_curr, *desc_next;
unsigned int i, cycles, shift, curr, next;
+ struct macb_txq *txq = macb_txq(queue);
struct macb *bp = queue->bp;
unsigned char desc[24];
unsigned long flags;
@@ -805,17 +836,17 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
return;
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
- head = queue->tx_head;
- tail = queue->tx_tail;
- ring_size = bp->tx_ring_size;
+ head = txq->head;
+ tail = txq->tail;
+ ring_size = bp->ctx->tx_ring_size;
count = CIRC_CNT(head, tail, ring_size);
if (!(tail % ring_size))
goto unlock;
if (!count) {
- queue->tx_head = 0;
- queue->tx_tail = 0;
+ txq->head = 0;
+ txq->tail = 0;
goto unlock;
}
@@ -859,8 +890,8 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
sizeof(struct macb_tx_skb));
}
- queue->tx_head = count;
- queue->tx_tail = 0;
+ txq->head = count;
+ txq->tail = 0;
/* Make descriptor updates visible to hardware */
wmb();
@@ -1253,6 +1284,7 @@ static void macb_tx_error_task(struct work_struct *work)
struct macb_queue *queue = container_of(work, struct macb_queue,
tx_error_task);
unsigned int q = queue - queue->bp->queues;
+ struct macb_txq *txq = macb_txq(queue);
struct macb *bp = queue->bp;
struct macb_tx_skb *tx_skb;
struct macb_dma_desc *desc;
@@ -1264,7 +1296,7 @@ static void macb_tx_error_task(struct work_struct *work)
u32 bytes = 0;
netdev_vdbg(bp->netdev, "macb_tx_error_task: q = %u, t = %u, h = %u\n",
- q, queue->tx_tail, queue->tx_head);
+ q, txq->tail, txq->head);
/* Prevent the queue NAPI TX poll from running, as it calls
* macb_tx_complete(), which in turn may call netif_wake_subqueue().
@@ -1291,7 +1323,7 @@ static void macb_tx_error_task(struct work_struct *work)
/* Treat frames in TX queue including the ones that caused the error.
* Free transmit buffers in upper layer.
*/
- for (tail = queue->tx_tail; tail != queue->tx_head; tail++) {
+ for (tail = txq->tail; tail != txq->head; tail++) {
u32 ctrl;
desc = macb_tx_desc(queue, tail);
@@ -1349,10 +1381,10 @@ static void macb_tx_error_task(struct work_struct *work)
wmb();
/* Reinitialize the TX desc queue */
- queue_writel(queue, TBQP, lower_32_bits(queue->tx_ring_dma));
+ queue_writel(queue, TBQP, lower_32_bits(txq->ring_dma));
/* Make TX ring reflect state of hardware */
- queue->tx_head = 0;
- queue->tx_tail = 0;
+ txq->head = 0;
+ txq->tail = 0;
/* Housework before enabling TX IRQ */
macb_writel(bp, TSR, macb_readl(bp, TSR));
@@ -1402,6 +1434,7 @@ static bool ptp_one_step_sync(struct sk_buff *skb)
static int macb_tx_complete(struct macb_queue *queue, int budget)
{
struct macb *bp = queue->bp;
+ struct macb_txq *txq = macb_txq(queue);
unsigned int q = queue - bp->queues;
unsigned long flags;
unsigned int tail;
@@ -1410,8 +1443,8 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
u32 bytes = 0;
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
- head = queue->tx_head;
- for (tail = queue->tx_tail; tail != head && packets < budget; tail++) {
+ head = txq->head;
+ for (tail = txq->tail; tail != head && packets < budget; tail++) {
struct macb_tx_skb *tx_skb;
struct sk_buff *skb;
struct macb_dma_desc *desc;
@@ -1467,10 +1500,10 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
netdev_tx_completed_queue(netdev_get_tx_queue(bp->netdev, q),
packets, bytes);
- queue->tx_tail = tail;
+ txq->tail = tail;
if (__netif_subqueue_stopped(bp->netdev, q) &&
- CIRC_CNT(queue->tx_head, queue->tx_tail,
- bp->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
+ CIRC_CNT(txq->head, txq->tail,
+ bp->ctx->tx_ring_size) <= MACB_TX_WAKEUP_THRESH(bp))
netif_wake_subqueue(bp->netdev, q);
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
@@ -1482,24 +1515,26 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
static void gem_rx_refill(struct macb_queue *queue)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
struct sk_buff *skb;
unsigned int entry;
dma_addr_t paddr;
- while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail,
- bp->rx_ring_size) > 0) {
- entry = macb_rx_ring_wrap(bp, queue->rx_prepared_head);
+ while (CIRC_SPACE(rxq->prepared_head, rxq->tail,
+ bp->ctx->rx_ring_size) > 0) {
+ entry = macb_rx_ring_wrap(bp, rxq->prepared_head);
/* Make hw descriptor updates visible to CPU */
rmb();
desc = macb_rx_desc(queue, entry);
- if (!queue->rx_skbuff[entry]) {
+ if (!rxq->skbuff[entry]) {
/* allocate sk_buff for this free entry in ring */
- skb = netdev_alloc_skb(bp->netdev, bp->rx_buffer_size);
+ skb = netdev_alloc_skb(bp->netdev,
+ bp->ctx->rx_buffer_size);
if (unlikely(!skb)) {
netdev_err(bp->netdev,
"Unable to allocate sk_buff\n");
@@ -1508,16 +1543,16 @@ static void gem_rx_refill(struct macb_queue *queue)
/* now fill corresponding descriptor entry */
paddr = dma_map_single(&bp->pdev->dev, skb->data,
- bp->rx_buffer_size,
+ bp->ctx->rx_buffer_size,
DMA_FROM_DEVICE);
if (dma_mapping_error(&bp->pdev->dev, paddr)) {
dev_kfree_skb(skb);
break;
}
- queue->rx_skbuff[entry] = skb;
+ rxq->skbuff[entry] = skb;
- if (entry == bp->rx_ring_size - 1)
+ if (entry == bp->ctx->rx_ring_size - 1)
paddr |= MACB_BIT(RX_WRAP);
desc->ctrl = 0;
/* Setting addr clears RX_USED and allows reception,
@@ -1544,14 +1579,14 @@ static void gem_rx_refill(struct macb_queue *queue)
dma_wmb();
desc->addr &= ~MACB_BIT(RX_USED);
}
- queue->rx_prepared_head++;
+ rxq->prepared_head++;
}
/* Make descriptor updates visible to hardware */
wmb();
netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n",
- queue, queue->rx_prepared_head, queue->rx_tail);
+ queue, rxq->prepared_head, rxq->tail);
}
/* Mark DMA descriptors from begin up to and not including end as unused */
@@ -1578,6 +1613,7 @@ static void discard_partial_frame(struct macb_queue *queue, unsigned int begin,
static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
int budget)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
struct sk_buff *skb;
@@ -1590,7 +1626,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
dma_addr_t addr;
bool rxused;
- entry = macb_rx_ring_wrap(bp, queue->rx_tail);
+ entry = macb_rx_ring_wrap(bp, rxq->tail);
desc = macb_rx_desc(queue, entry);
/* Make hw descriptor updates visible to CPU */
@@ -1607,7 +1643,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
ctrl = desc->ctrl;
- queue->rx_tail++;
+ rxq->tail++;
count++;
if (!(ctrl & MACB_BIT(RX_SOF) && ctrl & MACB_BIT(RX_EOF))) {
@@ -1617,7 +1653,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
queue->stats.rx_dropped++;
break;
}
- skb = queue->rx_skbuff[entry];
+ skb = rxq->skbuff[entry];
if (unlikely(!skb)) {
netdev_err(bp->netdev,
"inconsistent Rx descriptor chain\n");
@@ -1626,14 +1662,14 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
break;
}
/* now everything is ready for receiving packet */
- queue->rx_skbuff[entry] = NULL;
+ rxq->skbuff[entry] = NULL;
len = ctrl & bp->rx_frm_len_mask;
netdev_vdbg(bp->netdev, "gem_rx %u (len %u)\n", entry, len);
skb_put(skb, len);
dma_unmap_single(&bp->pdev->dev, addr,
- bp->rx_buffer_size, DMA_FROM_DEVICE);
+ bp->ctx->rx_buffer_size, DMA_FROM_DEVICE);
skb->protocol = eth_type_trans(skb, bp->netdev);
skb_checksum_none_assert(skb);
@@ -1713,7 +1749,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
skb_put(skb, len);
for (frag = first_frag; ; frag++) {
- unsigned int frag_len = bp->rx_buffer_size;
+ unsigned int frag_len = bp->ctx->rx_buffer_size;
if (offset + frag_len > len) {
if (unlikely(frag != last_frag)) {
@@ -1725,7 +1761,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
skb_copy_to_linear_data_offset(skb, offset,
macb_rx_buffer(queue, frag),
frag_len);
- offset += bp->rx_buffer_size;
+ offset += bp->ctx->rx_buffer_size;
desc = macb_rx_desc(queue, frag);
desc->addr &= ~MACB_BIT(RX_USED);
@@ -1750,32 +1786,34 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
static inline void macb_init_rx_ring(struct macb_queue *queue)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
struct macb_dma_desc *desc = NULL;
struct macb *bp = queue->bp;
dma_addr_t addr;
int i;
- addr = queue->rx_buffers_dma;
- for (i = 0; i < bp->rx_ring_size; i++) {
+ addr = rxq->buffers_dma;
+ for (i = 0; i < bp->ctx->rx_ring_size; i++) {
desc = macb_rx_desc(queue, i);
macb_set_addr(bp, desc, addr);
desc->ctrl = 0;
- addr += bp->rx_buffer_size;
+ addr += bp->ctx->rx_buffer_size;
}
desc->addr |= MACB_BIT(RX_WRAP);
- queue->rx_tail = 0;
+ rxq->tail = 0;
}
static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
int budget)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
bool reset_rx_queue = false;
int first_frag = -1;
unsigned int tail;
int received = 0;
- for (tail = queue->rx_tail; budget > 0; tail++) {
+ for (tail = rxq->tail; budget > 0; tail++) {
struct macb_dma_desc *desc = macb_rx_desc(queue, tail);
u32 ctrl;
@@ -1829,7 +1867,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
macb_init_rx_ring(queue);
- queue_writel(queue, RBQP, queue->rx_ring_dma);
+ queue_writel(queue, RBQP, rxq->ring_dma);
macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
@@ -1838,20 +1876,21 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
}
if (first_frag != -1)
- queue->rx_tail = first_frag;
+ rxq->tail = first_frag;
else
- queue->rx_tail = tail;
+ rxq->tail = tail;
return received;
}
static bool macb_rx_pending(struct macb_queue *queue)
{
+ struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
unsigned int entry;
- entry = macb_rx_ring_wrap(bp, queue->rx_tail);
+ entry = macb_rx_ring_wrap(bp, rxq->tail);
desc = macb_rx_desc(queue, entry);
/* Make hw descriptor updates visible to CPU */
@@ -1900,18 +1939,19 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
static void macb_tx_restart(struct macb_queue *queue)
{
+ struct macb_txq *txq = macb_txq(queue);
struct macb *bp = queue->bp;
unsigned int head_idx, tbqp;
unsigned long flags;
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
- if (queue->tx_head == queue->tx_tail)
+ if (txq->head == txq->tail)
goto out_tx_ptr_unlock;
tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
- head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, queue->tx_head));
+ head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, txq->head));
if (tbqp == head_idx)
goto out_tx_ptr_unlock;
@@ -1926,15 +1966,16 @@ static void macb_tx_restart(struct macb_queue *queue)
static bool macb_tx_complete_pending(struct macb_queue *queue)
{
+ struct macb_txq *txq = macb_txq(queue);
bool retval = false;
unsigned long flags;
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
- if (queue->tx_head != queue->tx_tail) {
+ if (txq->head != txq->tail) {
/* Make hw descriptor updates visible to CPU */
rmb();
- if (macb_tx_desc(queue, queue->tx_tail)->ctrl & MACB_BIT(TX_USED))
+ if (macb_tx_desc(queue, txq->tail)->ctrl & MACB_BIT(TX_USED))
retval = true;
}
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
@@ -2225,8 +2266,9 @@ static unsigned int macb_tx_map(struct macb *bp,
struct sk_buff *skb,
unsigned int hdrlen)
{
+ struct macb_txq *txq = macb_txq(queue);
unsigned int f, nr_frags = skb_shinfo(skb)->nr_frags;
- unsigned int len, i, tx_head = queue->tx_head;
+ unsigned int len, i, tx_head = txq->head;
u32 ctrl, lso_ctrl = 0, seq_ctrl = 0;
unsigned int eof = 1, mss_mfs = 0;
struct macb_tx_skb *tx_skb = NULL;
@@ -2346,11 +2388,12 @@ static unsigned int macb_tx_map(struct macb *bp,
ctrl |= MACB_BIT(TX_LAST);
eof = 0;
}
- if (unlikely(macb_tx_ring_wrap(bp, i) == bp->tx_ring_size - 1))
+ if (unlikely(macb_tx_ring_wrap(bp, i) ==
+ bp->ctx->tx_ring_size - 1))
ctrl |= MACB_BIT(TX_WRAP);
/* First descriptor is header descriptor */
- if (i == queue->tx_head) {
+ if (i == txq->head) {
ctrl |= MACB_BF(TX_LSO, lso_ctrl);
ctrl |= MACB_BF(TX_TCP_SEQ_SRC, seq_ctrl);
if ((bp->netdev->features & NETIF_F_HW_CSUM) &&
@@ -2370,16 +2413,16 @@ static unsigned int macb_tx_map(struct macb *bp,
*/
wmb();
desc->ctrl = ctrl;
- } while (i != queue->tx_head);
+ } while (i != txq->head);
- queue->tx_head = tx_head;
+ txq->head = tx_head;
return 0;
dma_error:
netdev_err(bp->netdev, "TX DMA map failed\n");
- for (i = queue->tx_head; i != tx_head; i++) {
+ for (i = txq->head; i != tx_head; i++) {
tx_skb = macb_tx_skb(queue, i);
macb_tx_unmap(bp, tx_skb, 0);
@@ -2499,6 +2542,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
unsigned int q = skb_get_queue_mapping(skb);
unsigned int desc_cnt, nr_frags, frag_size, f;
struct macb_queue *queue = &bp->queues[q];
+ struct macb_txq *txq = macb_txq(queue);
netdev_tx_t ret = NETDEV_TX_OK;
unsigned int hdrlen;
unsigned long flags;
@@ -2562,11 +2606,11 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
/* This is a hard error, log it. */
- if (CIRC_SPACE(queue->tx_head, queue->tx_tail,
- bp->tx_ring_size) < desc_cnt) {
+ if (CIRC_SPACE(txq->head, txq->tail,
+ bp->ctx->tx_ring_size) < desc_cnt) {
netif_stop_subqueue(netdev, q);
netdev_dbg(netdev, "tx_head = %u, tx_tail = %u\n",
- queue->tx_head, queue->tx_tail);
+ txq->head, txq->tail);
ret = NETDEV_TX_BUSY;
goto unlock;
}
@@ -2588,7 +2632,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
spin_unlock(&bp->lock);
- if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
+ if (CIRC_SPACE(txq->head, txq->tail, bp->ctx->tx_ring_size) < 1)
netif_stop_subqueue(netdev, q);
unlock:
@@ -2600,38 +2644,42 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
{
if (!macb_is_gem(bp)) {
- bp->rx_buffer_size = MACB_RX_BUFFER_SIZE;
+ bp->ctx->rx_buffer_size = MACB_RX_BUFFER_SIZE;
} else {
- bp->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
+ bp->ctx->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
- if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) {
+ if (bp->ctx->rx_buffer_size % RX_BUFFER_MULTIPLE) {
netdev_dbg(bp->netdev,
"RX buffer must be multiple of %d bytes, expanding\n",
RX_BUFFER_MULTIPLE);
- bp->rx_buffer_size =
- roundup(bp->rx_buffer_size, RX_BUFFER_MULTIPLE);
+ bp->ctx->rx_buffer_size =
+ roundup(bp->ctx->rx_buffer_size,
+ RX_BUFFER_MULTIPLE);
}
}
- netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%zu]\n",
- bp->netdev->mtu, bp->rx_buffer_size);
+ netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n",
+ bp->netdev->mtu, bp->ctx->rx_buffer_size);
}
static void gem_free_rx_buffers(struct macb *bp)
{
- struct sk_buff *skb;
- struct macb_dma_desc *desc;
+ struct macb_dma_desc *desc;
struct macb_queue *queue;
- dma_addr_t addr;
+ struct macb_rxq *rxq;
+ struct sk_buff *skb;
+ dma_addr_t addr;
unsigned int q;
int i;
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- if (!queue->rx_skbuff)
+ rxq = &bp->ctx->rxq[q];
+
+ if (!rxq->skbuff)
continue;
- for (i = 0; i < bp->rx_ring_size; i++) {
- skb = queue->rx_skbuff[i];
+ for (i = 0; i < bp->ctx->rx_ring_size; i++) {
+ skb = rxq->skbuff[i];
if (!skb)
continue;
@@ -2639,95 +2687,106 @@ static void gem_free_rx_buffers(struct macb *bp)
desc = macb_rx_desc(queue, i);
addr = macb_get_addr(bp, desc);
- dma_unmap_single(&bp->pdev->dev, addr, bp->rx_buffer_size,
- DMA_FROM_DEVICE);
+ dma_unmap_single(&bp->pdev->dev, addr,
+ bp->ctx->rx_buffer_size,
+ DMA_FROM_DEVICE);
dev_kfree_skb_any(skb);
skb = NULL;
}
- kfree(queue->rx_skbuff);
- queue->rx_skbuff = NULL;
+ kfree(rxq->skbuff);
+ rxq->skbuff = NULL;
}
}
static void macb_free_rx_buffers(struct macb *bp)
{
- struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
- if (queue->rx_buffers) {
+ if (rxq->buffers) {
dma_free_coherent(&bp->pdev->dev,
- bp->rx_ring_size * bp->rx_buffer_size,
- queue->rx_buffers, queue->rx_buffers_dma);
- queue->rx_buffers = NULL;
+ bp->ctx->rx_ring_size *
+ bp->ctx->rx_buffer_size,
+ rxq->buffers, rxq->buffers_dma);
+ rxq->buffers = NULL;
}
}
static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
{
- return macb_dma_desc_get_size(bp) * bp->tx_ring_size + bp->tx_bd_rd_prefetch;
+ return macb_dma_desc_get_size(bp) * bp->ctx->tx_ring_size +
+ bp->tx_bd_rd_prefetch;
}
static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
{
- return macb_dma_desc_get_size(bp) * bp->rx_ring_size + bp->rx_bd_rd_prefetch;
+ return macb_dma_desc_get_size(bp) * bp->ctx->rx_ring_size +
+ bp->rx_bd_rd_prefetch;
}
static void macb_free_consistent(struct macb *bp)
{
struct device *dev = &bp->pdev->dev;
- struct macb_queue *queue;
+ struct macb_txq *txq;
+ struct macb_rxq *rxq;
unsigned int q;
size_t size;
bp->macbgem_ops.mog_free_rx_buffers(bp);
+ txq = &bp->ctx->txq[0];
size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
- dma_free_coherent(dev, size, bp->queues[0].tx_ring, bp->queues[0].tx_ring_dma);
+ dma_free_coherent(dev, size, txq->ring, txq->ring_dma);
+ rxq = &bp->ctx->rxq[0];
size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
- dma_free_coherent(dev, size, bp->queues[0].rx_ring, bp->queues[0].rx_ring_dma);
+ dma_free_coherent(dev, size, rxq->ring, rxq->ring_dma);
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- kfree(queue->tx_skb);
- queue->tx_skb = NULL;
- queue->tx_ring = NULL;
- queue->rx_ring = NULL;
+ for (q = 0; q < bp->num_queues; ++q) {
+ txq = &bp->ctx->txq[q];
+ rxq = &bp->ctx->rxq[q];
+
+ kfree(txq->skb);
+ txq->skb = NULL;
+ txq->ring = NULL;
+ rxq->ring = NULL;
}
}
static int gem_alloc_rx_buffers(struct macb *bp)
{
- struct macb_queue *queue;
+ struct macb_rxq *rxq;
unsigned int q;
int size;
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- size = bp->rx_ring_size * sizeof(struct sk_buff *);
- queue->rx_skbuff = kzalloc(size, GFP_KERNEL);
- if (!queue->rx_skbuff)
+ for (q = 0; q < bp->num_queues; ++q) {
+ rxq = &bp->ctx->rxq[q];
+ size = bp->ctx->rx_ring_size * sizeof(struct sk_buff *);
+ rxq->skbuff = kzalloc(size, GFP_KERNEL);
+ if (!rxq->skbuff)
return -ENOMEM;
else
netdev_dbg(bp->netdev,
"Allocated %d RX struct sk_buff entries at %p\n",
- bp->rx_ring_size, queue->rx_skbuff);
+ bp->ctx->rx_ring_size, rxq->skbuff);
}
return 0;
}
static int macb_alloc_rx_buffers(struct macb *bp)
{
- struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
int size;
- size = bp->rx_ring_size * bp->rx_buffer_size;
- queue->rx_buffers = dma_alloc_coherent(&bp->pdev->dev, size,
- &queue->rx_buffers_dma, GFP_KERNEL);
- if (!queue->rx_buffers)
+ size = bp->ctx->rx_ring_size * bp->ctx->rx_buffer_size;
+ rxq->buffers = dma_alloc_coherent(&bp->pdev->dev, size,
+ &rxq->buffers_dma, GFP_KERNEL);
+ if (!rxq->buffers)
return -ENOMEM;
netdev_dbg(bp->netdev,
"Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
- size, (unsigned long)queue->rx_buffers_dma, queue->rx_buffers);
+ size, (unsigned long)rxq->buffers_dma, rxq->buffers);
return 0;
}
@@ -2735,7 +2794,8 @@ static int macb_alloc_consistent(struct macb *bp)
{
struct device *dev = &bp->pdev->dev;
dma_addr_t tx_dma, rx_dma;
- struct macb_queue *queue;
+ struct macb_txq *txq;
+ struct macb_rxq *rxq;
unsigned int q;
void *tx, *rx;
size_t size;
@@ -2761,16 +2821,19 @@ static int macb_alloc_consistent(struct macb *bp)
netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
size, bp->num_queues, (unsigned long)rx_dma, rx);
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- queue->tx_ring = tx + macb_tx_ring_size_per_queue(bp) * q;
- queue->tx_ring_dma = tx_dma + macb_tx_ring_size_per_queue(bp) * q;
+ for (q = 0; q < bp->num_queues; ++q) {
+ txq = &bp->ctx->txq[q];
+ rxq = &bp->ctx->rxq[q];
- queue->rx_ring = rx + macb_rx_ring_size_per_queue(bp) * q;
- queue->rx_ring_dma = rx_dma + macb_rx_ring_size_per_queue(bp) * q;
+ txq->ring = tx + macb_tx_ring_size_per_queue(bp) * q;
+ txq->ring_dma = tx_dma + macb_tx_ring_size_per_queue(bp) * q;
- size = bp->tx_ring_size * sizeof(struct macb_tx_skb);
- queue->tx_skb = kmalloc(size, GFP_KERNEL);
- if (!queue->tx_skb)
+ rxq->ring = rx + macb_rx_ring_size_per_queue(bp) * q;
+ rxq->ring_dma = rx_dma + macb_rx_ring_size_per_queue(bp) * q;
+
+ size = bp->ctx->tx_ring_size * sizeof(struct macb_tx_skb);
+ txq->skb = kmalloc(size, GFP_KERNEL);
+ if (!txq->skb)
goto out_err;
}
if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
@@ -2785,8 +2848,10 @@ static int macb_alloc_consistent(struct macb *bp)
static void gem_init_rx_ring(struct macb_queue *queue)
{
- queue->rx_tail = 0;
- queue->rx_prepared_head = 0;
+ struct macb_rxq *rxq = macb_rxq(queue);
+
+ rxq->tail = 0;
+ rxq->prepared_head = 0;
gem_rx_refill(queue);
}
@@ -2795,18 +2860,20 @@ static void gem_init_rings(struct macb *bp)
{
struct macb_queue *queue;
struct macb_dma_desc *desc = NULL;
+ struct macb_txq *txq;
unsigned int q;
int i;
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- for (i = 0; i < bp->tx_ring_size; i++) {
+ txq = &bp->ctx->txq[q];
+ for (i = 0; i < bp->ctx->tx_ring_size; i++) {
desc = macb_tx_desc(queue, i);
macb_set_addr(bp, desc, 0);
desc->ctrl = MACB_BIT(TX_USED);
}
desc->ctrl |= MACB_BIT(TX_WRAP);
- queue->tx_head = 0;
- queue->tx_tail = 0;
+ txq->head = 0;
+ txq->tail = 0;
gem_init_rx_ring(queue);
}
@@ -2814,18 +2881,19 @@ static void gem_init_rings(struct macb *bp)
static void macb_init_rings(struct macb *bp)
{
- int i;
+ struct macb_txq *txq = &bp->ctx->txq[0];
struct macb_dma_desc *desc = NULL;
+ int i;
macb_init_rx_ring(&bp->queues[0]);
- for (i = 0; i < bp->tx_ring_size; i++) {
+ for (i = 0; i < bp->ctx->tx_ring_size; i++) {
desc = macb_tx_desc(&bp->queues[0], i);
macb_set_addr(bp, desc, 0);
desc->ctrl = MACB_BIT(TX_USED);
}
- bp->queues[0].tx_head = 0;
- bp->queues[0].tx_tail = 0;
+ txq->head = 0;
+ txq->tail = 0;
desc->ctrl |= MACB_BIT(TX_WRAP);
}
@@ -2941,7 +3009,7 @@ static void macb_configure_dma(struct macb *bp)
unsigned int q;
u32 dmacfg;
- buffer_size = bp->rx_buffer_size / RX_BUFFER_MULTIPLE;
+ buffer_size = bp->ctx->rx_buffer_size / RX_BUFFER_MULTIPLE;
if (macb_is_gem(bp)) {
dmacfg = gem_readl(bp, DMACFG) & ~GEM_BF(RXBS, -1L);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -3148,14 +3216,22 @@ static int macb_open(struct net_device *netdev)
if (err < 0)
return err;
+ bp->ctx = kzalloc_obj(*bp->ctx);
+ if (!bp->ctx) {
+ err = -ENOMEM;
+ goto pm_exit;
+ }
+
/* RX buffers initialization */
macb_init_rx_buffer_size(bp, bufsz);
+ bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
+ bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
err = macb_alloc_consistent(bp);
if (err) {
netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
err);
- goto pm_exit;
+ goto free_ctx;
}
bp->macbgem_ops.mog_init_rings(bp);
@@ -3197,6 +3273,9 @@ static int macb_open(struct net_device *netdev)
napi_disable(&queue->napi_tx);
}
macb_free_consistent(bp);
+free_ctx:
+ kfree(bp->ctx);
+ bp->ctx = NULL;
pm_exit:
pm_runtime_put_sync(&bp->pdev->dev);
return err;
@@ -3230,6 +3309,8 @@ static int macb_close(struct net_device *netdev)
spin_unlock_irqrestore(&bp->lock, flags);
macb_free_consistent(bp);
+ kfree(bp->ctx);
+ bp->ctx = NULL;
if (bp->ptp_info)
bp->ptp_info->ptp_remove(netdev);
@@ -3596,14 +3677,15 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
void *p)
{
struct macb *bp = netdev_priv(netdev);
+ struct macb_txq *txq = &bp->ctx->txq[0];
unsigned int tail, head;
u32 *regs_buff = p;
regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
| MACB_GREGS_VERSION;
- tail = macb_tx_ring_wrap(bp, bp->queues[0].tx_tail);
- head = macb_tx_ring_wrap(bp, bp->queues[0].tx_head);
+ tail = macb_tx_ring_wrap(bp, txq->tail);
+ head = macb_tx_ring_wrap(bp, txq->head);
regs_buff[0] = macb_readl(bp, NCR);
regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
@@ -3682,8 +3764,8 @@ static void macb_get_ringparam(struct net_device *netdev,
ring->rx_max_pending = MAX_RX_RING_SIZE;
ring->tx_max_pending = MAX_TX_RING_SIZE;
- ring->rx_pending = bp->rx_ring_size;
- ring->tx_pending = bp->tx_ring_size;
+ ring->rx_pending = bp->ctx->rx_ring_size;
+ ring->tx_pending = bp->ctx->tx_ring_size;
}
static int macb_set_ringparam(struct net_device *netdev,
@@ -3706,8 +3788,8 @@ static int macb_set_ringparam(struct net_device *netdev,
MIN_TX_RING_SIZE, MAX_TX_RING_SIZE);
new_tx_size = roundup_pow_of_two(new_tx_size);
- if ((new_tx_size == bp->tx_ring_size) &&
- (new_rx_size == bp->rx_ring_size)) {
+ if (new_tx_size == bp->configured_tx_ring_size &&
+ new_rx_size == bp->configured_rx_ring_size) {
/* nothing to do */
return 0;
}
@@ -3717,8 +3799,8 @@ static int macb_set_ringparam(struct net_device *netdev,
macb_close(bp->netdev);
}
- bp->rx_ring_size = new_rx_size;
- bp->tx_ring_size = new_tx_size;
+ bp->configured_rx_ring_size = new_rx_size;
+ bp->configured_tx_ring_size = new_tx_size;
if (reset)
macb_open(bp->netdev);
@@ -4725,9 +4807,6 @@ static int macb_init_dflt(struct platform_device *pdev)
int err;
u32 val, reg;
- bp->tx_ring_size = DEFAULT_TX_RING_SIZE;
- bp->rx_ring_size = DEFAULT_RX_RING_SIZE;
-
/* set the queue register mapping once for all: queue0 has a special
* register mapping but we don't want to test the queue index then
* compute the corresponding register offset at run time.
@@ -4926,26 +5005,26 @@ static struct sifive_fu540_macb_mgmt *mgmt;
static int at91ether_alloc_coherent(struct macb *bp)
{
- struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
- queue->rx_ring = dma_alloc_coherent(&bp->pdev->dev,
- (AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(bp)),
- &queue->rx_ring_dma, GFP_KERNEL);
- if (!queue->rx_ring)
+ rxq->ring = dma_alloc_coherent(&bp->pdev->dev,
+ (AT91ETHER_MAX_RX_DESCR *
+ macb_dma_desc_get_size(bp)),
+ &rxq->ring_dma, GFP_KERNEL);
+ if (!rxq->ring)
return -ENOMEM;
- queue->rx_buffers = dma_alloc_coherent(&bp->pdev->dev,
- AT91ETHER_MAX_RX_DESCR *
- AT91ETHER_MAX_RBUFF_SZ,
- &queue->rx_buffers_dma,
- GFP_KERNEL);
- if (!queue->rx_buffers) {
+ rxq->buffers = dma_alloc_coherent(&bp->pdev->dev,
+ AT91ETHER_MAX_RX_DESCR *
+ AT91ETHER_MAX_RBUFF_SZ,
+ &rxq->buffers_dma,
+ GFP_KERNEL);
+ if (!rxq->buffers) {
dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
macb_dma_desc_get_size(bp),
- queue->rx_ring, queue->rx_ring_dma);
- queue->rx_ring = NULL;
+ rxq->ring, rxq->ring_dma);
+ rxq->ring = NULL;
return -ENOMEM;
}
@@ -4954,22 +5033,22 @@ static int at91ether_alloc_coherent(struct macb *bp)
static void at91ether_free_coherent(struct macb *bp)
{
- struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
- if (queue->rx_ring) {
+ if (rxq->ring) {
dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
macb_dma_desc_get_size(bp),
- queue->rx_ring, queue->rx_ring_dma);
- queue->rx_ring = NULL;
+ rxq->ring, rxq->ring_dma);
+ rxq->ring = NULL;
}
- if (queue->rx_buffers) {
+ if (rxq->buffers) {
dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
AT91ETHER_MAX_RBUFF_SZ,
- queue->rx_buffers, queue->rx_buffers_dma);
- queue->rx_buffers = NULL;
+ rxq->buffers, rxq->buffers_dma);
+ rxq->buffers = NULL;
}
}
@@ -4977,6 +5056,7 @@ static void at91ether_free_coherent(struct macb *bp)
static int at91ether_start(struct macb *bp)
{
struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
struct macb_dma_desc *desc;
dma_addr_t addr;
u32 ctl;
@@ -4986,7 +5066,7 @@ static int at91ether_start(struct macb *bp)
if (ret)
return ret;
- addr = queue->rx_buffers_dma;
+ addr = rxq->buffers_dma;
for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) {
desc = macb_rx_desc(queue, i);
macb_set_addr(bp, desc, addr);
@@ -4998,10 +5078,10 @@ static int at91ether_start(struct macb *bp)
desc->addr |= MACB_BIT(RX_WRAP);
/* Reset buffer index */
- queue->rx_tail = 0;
+ rxq->tail = 0;
/* Program address of descriptor list in Rx Buffer Queue register */
- macb_writel(bp, RBQP, queue->rx_ring_dma);
+ macb_writel(bp, RBQP, rxq->ring_dma);
/* Enable Receive and Transmit */
ctl = macb_readl(bp, NCR);
@@ -5139,15 +5219,15 @@ static void at91ether_rx(struct net_device *netdev)
{
struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue = &bp->queues[0];
+ struct macb_rxq *rxq = &bp->ctx->rxq[0];
struct macb_dma_desc *desc;
unsigned char *p_recv;
struct sk_buff *skb;
unsigned int pktlen;
- desc = macb_rx_desc(queue, queue->rx_tail);
+ desc = macb_rx_desc(queue, rxq->tail);
while (desc->addr & MACB_BIT(RX_USED)) {
- p_recv = queue->rx_buffers +
- queue->rx_tail * AT91ETHER_MAX_RBUFF_SZ;
+ p_recv = rxq->buffers + rxq->tail * AT91ETHER_MAX_RBUFF_SZ;
pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
skb = netdev_alloc_skb(netdev, pktlen + 2);
if (skb) {
@@ -5169,12 +5249,12 @@ static void at91ether_rx(struct net_device *netdev)
desc->addr &= ~MACB_BIT(RX_USED);
/* wrap after last buffer */
- if (queue->rx_tail == AT91ETHER_MAX_RX_DESCR - 1)
- queue->rx_tail = 0;
+ if (rxq->tail == AT91ETHER_MAX_RX_DESCR - 1)
+ rxq->tail = 0;
else
- queue->rx_tail++;
+ rxq->tail++;
- desc = macb_rx_desc(queue, queue->rx_tail);
+ desc = macb_rx_desc(queue, rxq->tail);
}
}
@@ -5829,6 +5909,8 @@ static int macb_probe(struct platform_device *pdev)
bp->rx_clk = rx_clk;
bp->tsu_clk = tsu_clk;
bp->jumbo_max_len = macb_config->jumbo_max_len;
+ bp->configured_rx_ring_size = DEFAULT_RX_RING_SIZE;
+ bp->configured_tx_ring_size = DEFAULT_TX_RING_SIZE;
if (!hw_is_gem(bp->regs, bp->native_io))
bp->max_tx_length = MACB_MAX_TX_LEN;
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH net-next 07/11] net: macb: avoid macb_init_rx_buffer_size() modifying state
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 08/11] net: macb: make `struct macb` subset reachable from macb_context struct Théo Lebrun
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
macb_init_rx_buffer_size() takes the macb private data struct and
overwrites its bp->ctx->rx_buffer_size. To make it usable with multiple
contexts, make it return its computed value instead.
Also, move the `bufsz` computation into it. The value is only used on
GEM, and for historical reasons it currently lives in macb_open().
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 0f63d9b89c11..033c36d8a3d4 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2641,25 +2641,26 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
return ret;
}
-static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
+static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
{
- if (!macb_is_gem(bp)) {
- bp->ctx->rx_buffer_size = MACB_RX_BUFFER_SIZE;
- } else {
- bp->ctx->rx_buffer_size = MIN(size, RX_BUFFER_MAX);
+ unsigned int size;
- if (bp->ctx->rx_buffer_size % RX_BUFFER_MULTIPLE) {
+ if (!macb_is_gem(bp)) {
+ size = MACB_RX_BUFFER_SIZE;
+ } else {
+ size = mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
+ size = MIN(size, RX_BUFFER_MAX);
+
+ if (size % RX_BUFFER_MULTIPLE) {
netdev_dbg(bp->netdev,
"RX buffer must be multiple of %d bytes, expanding\n",
RX_BUFFER_MULTIPLE);
- bp->ctx->rx_buffer_size =
- roundup(bp->ctx->rx_buffer_size,
- RX_BUFFER_MULTIPLE);
+ size = roundup(size, RX_BUFFER_MULTIPLE);
}
}
- netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n",
- bp->netdev->mtu, bp->ctx->rx_buffer_size);
+ netdev_dbg(bp->netdev, "mtu [%u] rx_buffer_size [%u]\n", mtu, size);
+ return size;
}
static void gem_free_rx_buffers(struct macb *bp)
@@ -3204,7 +3205,6 @@ static void macb_set_rx_mode(struct net_device *netdev)
static int macb_open(struct net_device *netdev)
{
- size_t bufsz = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
struct macb *bp = netdev_priv(netdev);
struct macb_queue *queue;
unsigned int q;
@@ -3223,7 +3223,7 @@ static int macb_open(struct net_device *netdev)
}
/* RX buffers initialization */
- macb_init_rx_buffer_size(bp, bufsz);
+ bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
--
2.53.0
* [PATCH net-next 08/11] net: macb: make `struct macb` subset reachable from macb_context struct
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 07/11] net: macb: avoid macb_init_rx_buffer_size() modifying state Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 09/11] net: macb: introduce macb_context_alloc() helper Théo Lebrun
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
For parallel MACB contexts to become a reality, many functions need
to stop operating on bp->ctx (the currently active context) and instead
work on a context they get passed. That context might be
(1) the new one that is getting allocated and initialised, or,
(2) the old one to be freed.
To reduce the bug surface, we change those functions to take *only* a
context and no `struct macb *bp`. That way, bugs caused by using
`bp->ctx` instead of `ctx` cannot occur.
For that, we need to embed a subset of `struct macb` information into
each context so that all helpers can still do their jobs. That subset
must be constant once probe is completed. Do this by taking a pointer
to a subset of macb called `struct macb_info`.
That subset is accessible from the context (ctx->info->caps) or
directly from bp (bp->caps) using the `-fms-extensions` option, thanks
to commit c4781dc3d1cf ("Kbuild: enable -fms-extensions").
https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html
Add the structure and assign ctx->info at allocation time;
nothing uses it yet.
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb.h | 58 ++--
drivers/net/ethernet/cadence/macb_main.c | 474 ++++++++++++++++---------------
drivers/net/ethernet/cadence/macb_ptp.c | 8 +-
3 files changed, 291 insertions(+), 249 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 8821205e8875..66e3638b84c0 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -840,7 +840,7 @@
*/
#define macb_or_gem_writel(__bp, __reg, __value) \
({ \
- if (macb_is_gem((__bp))) \
+ if (macb_is_gem((__bp)->caps)) \
gem_writel((__bp), __reg, __value); \
else \
macb_writel((__bp), __reg, __value); \
@@ -849,7 +849,7 @@
#define macb_or_gem_readl(__bp, __reg) \
({ \
u32 __v; \
- if (macb_is_gem((__bp))) \
+ if (macb_is_gem((__bp)->caps)) \
__v = gem_readl((__bp), __reg); \
else \
__v = macb_readl((__bp), __reg); \
@@ -1196,11 +1196,12 @@ static const struct gem_statistic queue_statistics[] = {
struct macb;
struct macb_queue;
+struct macb_context;
struct macb_or_gem_ops {
- int (*mog_alloc_rx_buffers)(struct macb *bp);
- void (*mog_free_rx_buffers)(struct macb *bp);
- void (*mog_init_rings)(struct macb *bp);
+ int (*mog_alloc_rx_buffers)(struct macb_context *ctx);
+ void (*mog_free_rx_buffers)(struct macb_context *ctx);
+ void (*mog_init_rings)(struct macb_context *ctx);
int (*mog_rx)(struct macb_queue *queue, struct napi_struct *napi,
int budget);
};
@@ -1290,6 +1291,16 @@ struct ethtool_rx_fs_list {
unsigned int count;
};
+struct macb_info {
+ struct platform_device *pdev;
+ struct net_device *netdev;
+ struct macb_or_gem_ops macbgem_ops;
+ unsigned int num_queues;
+ u32 caps;
+ int rx_bd_rd_prefetch;
+ int tx_bd_rd_prefetch;
+};
+
struct macb_rxq {
struct macb_dma_desc *ring; /* MACB & GEM */
dma_addr_t ring_dma; /* MACB & GEM */
@@ -1309,6 +1320,8 @@ struct macb_txq {
};
struct macb_context {
+ const struct macb_info *info;
+
unsigned int rx_buffer_size;
unsigned int rx_ring_size;
unsigned int tx_ring_size;
@@ -1324,6 +1337,15 @@ struct macb {
u32 (*macb_reg_readl)(struct macb *bp, int offset);
void (*macb_reg_writel)(struct macb *bp, int offset, u32 value);
+ /*
+ * Give direct access (bp->caps) and
+ * allow taking a pointer to it (&bp->info) for contexts.
+ */
+ union {
+ struct macb_info;
+ struct macb_info info;
+ };
+
/*
* Context stores all its parameters.
* But we must remember them across closure.
@@ -1335,17 +1357,14 @@ struct macb {
struct macb_dma_desc *rx_ring_tieoff;
dma_addr_t rx_ring_tieoff_dma;
- unsigned int num_queues;
struct macb_queue queues[MACB_MAX_QUEUES];
spinlock_t lock;
- struct platform_device *pdev;
struct clk *pclk;
struct clk *hclk;
struct clk *tx_clk;
struct clk *rx_clk;
struct clk *tsu_clk;
- struct net_device *netdev;
/* Protects hw_stats and ethtool_stats */
spinlock_t stats_lock;
union {
@@ -1353,15 +1372,12 @@ struct macb {
struct gem_stats gem;
} hw_stats;
- struct macb_or_gem_ops macbgem_ops;
-
struct mii_bus *mii_bus;
struct phylink *phylink;
struct phylink_config phylink_config;
struct phylink_pcs phylink_usx_pcs;
struct phylink_pcs phylink_sgmii_pcs;
- u32 caps;
unsigned int dma_burst_length;
phy_interface_t phy_interface;
@@ -1404,9 +1420,6 @@ struct macb {
struct delayed_work tx_lpi_work;
u32 tx_lpi_timer;
- int rx_bd_rd_prefetch;
- int tx_bd_rd_prefetch;
-
u32 rx_intr_mask;
struct macb_pm_data pm_data;
@@ -1458,14 +1471,15 @@ static inline void gem_ptp_do_txstamp(struct macb *bp, struct sk_buff *skb, stru
static inline void gem_ptp_do_rxstamp(struct macb *bp, struct sk_buff *skb, struct macb_dma_desc *desc) { }
#endif
-static inline bool macb_is_gem(struct macb *bp)
+static inline bool macb_is_gem(u32 caps)
{
- return !!(bp->caps & MACB_CAPS_MACB_IS_GEM);
+ return !!(caps & MACB_CAPS_MACB_IS_GEM);
}
-static inline bool gem_has_ptp(struct macb *bp)
+static inline bool gem_has_ptp(u32 caps)
{
- return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) && (bp->caps & MACB_CAPS_GEM_HAS_PTP);
+ return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) &&
+ (caps & MACB_CAPS_GEM_HAS_PTP);
}
/* ENST Helper functions */
@@ -1481,16 +1495,16 @@ static inline u64 enst_max_hw_interval(u32 speed_mbps)
ENST_TIME_GRANULARITY_NS * 1000, (speed_mbps));
}
-static inline bool macb_dma64(struct macb *bp)
+static inline bool macb_dma64(u32 caps)
{
return IS_ENABLED(CONFIG_ARCH_DMA_ADDR_T_64BIT) &&
- bp->caps & MACB_CAPS_DMA_64B;
+ caps & MACB_CAPS_DMA_64B;
}
-static inline bool macb_dma_ptp(struct macb *bp)
+static inline bool macb_dma_ptp(u32 caps)
{
return IS_ENABLED(CONFIG_MACB_USE_HWSTAMP) &&
- bp->caps & MACB_CAPS_DMA_PTP;
+ caps & MACB_CAPS_DMA_PTP;
}
/**
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 033c36d8a3d4..47f0d27cd979 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -122,33 +122,36 @@ struct sifive_fu540_macb_mgmt {
* word 5: timestamp word 1
* word 6: timestamp word 2
*/
-static unsigned int macb_dma_desc_get_size(struct macb *bp)
+static unsigned int macb_dma_desc_get_size(u32 caps)
{
unsigned int desc_size = sizeof(struct macb_dma_desc);
- if (macb_dma64(bp))
+ if (macb_dma64(caps))
desc_size += sizeof(struct macb_dma_desc_64);
- if (macb_dma_ptp(bp))
+ if (macb_dma_ptp(caps))
desc_size += sizeof(struct macb_dma_desc_ptp);
return desc_size;
}
-static unsigned int macb_adj_dma_desc_idx(struct macb *bp, unsigned int desc_idx)
+static unsigned int macb_adj_dma_desc_idx(struct macb_context *ctx,
+ unsigned int desc_idx)
{
- return desc_idx * (1 + macb_dma64(bp) + macb_dma_ptp(bp));
+ return desc_idx * (1 + macb_dma64(ctx->info->caps) +
+ macb_dma_ptp(ctx->info->caps));
}
-static struct macb_dma_desc_64 *macb_64b_desc(struct macb *bp, struct macb_dma_desc *desc)
+static struct macb_dma_desc_64 *macb_64b_desc(struct macb_dma_desc *desc)
{
return (struct macb_dma_desc_64 *)((void *)desc
+ sizeof(struct macb_dma_desc));
}
/* Ring buffer accessors */
-static unsigned int macb_tx_ring_wrap(struct macb *bp, unsigned int index)
+static unsigned int macb_tx_ring_wrap(struct macb_context *ctx,
+ unsigned int index)
{
- return index & (bp->ctx->tx_ring_size - 1);
+ return index & (ctx->tx_ring_size - 1);
}
static struct macb_txq *macb_txq(struct macb_queue *queue)
@@ -167,14 +170,13 @@ static struct macb_rxq *macb_rxq(struct macb_queue *queue)
return &bp->ctx->rxq[q];
}
-static struct macb_dma_desc *macb_tx_desc(struct macb_queue *queue,
+static struct macb_dma_desc *macb_tx_desc(struct macb_context *ctx,
+ unsigned int q,
unsigned int index)
{
- struct macb_txq *txq = macb_txq(queue);
-
- index = macb_tx_ring_wrap(queue->bp, index);
- index = macb_adj_dma_desc_idx(queue->bp, index);
- return &txq->ring[index];
+ index = macb_tx_ring_wrap(ctx, index);
+ index = macb_adj_dma_desc_idx(ctx, index);
+ return &ctx->txq[q].ring[index];
}
static struct macb_tx_skb *macb_tx_skb(struct macb_queue *queue,
@@ -182,40 +184,42 @@ static struct macb_tx_skb *macb_tx_skb(struct macb_queue *queue,
{
struct macb_txq *txq = macb_txq(queue);
- return &txq->skb[macb_tx_ring_wrap(queue->bp, index)];
+ return &txq->skb[macb_tx_ring_wrap(queue->bp->ctx, index)];
}
static dma_addr_t macb_tx_dma(struct macb_queue *queue, unsigned int index)
{
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_txq *txq = macb_txq(queue);
dma_addr_t offset;
- offset = macb_tx_ring_wrap(queue->bp, index) *
- macb_dma_desc_get_size(queue->bp);
+ offset = macb_tx_ring_wrap(ctx, index) *
+ macb_dma_desc_get_size(queue->bp->caps);
return txq->ring_dma + offset;
}
-static unsigned int macb_rx_ring_wrap(struct macb *bp, unsigned int index)
+static unsigned int macb_rx_ring_wrap(struct macb_context *ctx,
+ unsigned int index)
{
- return index & (bp->ctx->rx_ring_size - 1);
+ return index & (ctx->rx_ring_size - 1);
}
-static struct macb_dma_desc *macb_rx_desc(struct macb_queue *queue, unsigned int index)
+static struct macb_dma_desc *macb_rx_desc(struct macb_context *ctx,
+ unsigned int q, unsigned int index)
{
- struct macb_rxq *rxq = macb_rxq(queue);
-
- index = macb_rx_ring_wrap(queue->bp, index);
- index = macb_adj_dma_desc_idx(queue->bp, index);
- return &rxq->ring[index];
+ index = macb_rx_ring_wrap(ctx, index);
+ index = macb_adj_dma_desc_idx(ctx, index);
+ return &ctx->rxq[q].ring[index];
}
static void *macb_rx_buffer(struct macb_queue *queue, unsigned int index)
{
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_rxq *rxq = macb_rxq(queue);
- return rxq->buffers + queue->bp->ctx->rx_buffer_size *
- macb_rx_ring_wrap(queue->bp, index);
+ return rxq->buffers + ctx->rx_buffer_size *
+ macb_rx_ring_wrap(ctx, index);
}
/* I/O accessors */
@@ -278,7 +282,7 @@ static void macb_set_hwaddr(struct macb *bp)
top = get_unaligned_le16(bp->netdev->dev_addr + 4);
macb_or_gem_writel(bp, SA1T, top);
- if (gem_has_ptp(bp)) {
+ if (gem_has_ptp(bp->caps)) {
gem_writel(bp, RXPTPUNI, bottom);
gem_writel(bp, TXPTPUNI, bottom);
}
@@ -489,7 +493,7 @@ static void macb_init_buffers(struct macb *bp)
unsigned int q;
/* Single register for all queues' high 32 bits. */
- if (macb_dma64(bp)) {
+ if (macb_dma64(bp->caps)) {
rxq = &bp->ctx->rxq[0];
txq = &bp->ctx->txq[0];
macb_writel(bp, RBQPH, upper_32_bits(rxq->ring_dma));
@@ -772,7 +776,7 @@ static void macb_mac_config(struct phylink_config *config, unsigned int mode,
if (bp->caps & MACB_CAPS_MACB_IS_EMAC) {
if (state->interface == PHY_INTERFACE_MODE_RMII)
ctrl |= MACB_BIT(RM9200_RMII);
- } else if (macb_is_gem(bp)) {
+ } else if (macb_is_gem(bp->caps)) {
ctrl &= ~(GEM_BIT(SGMIIEN) | GEM_BIT(PCSSEL));
ncr &= ~GEM_BIT(ENABLE_HS_MAC);
@@ -824,13 +828,14 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
unsigned int head, tail, count, ring_size, desc_size;
struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
struct macb_dma_desc *desc_curr, *desc_next;
+ unsigned int q = queue - queue->bp->queues;
unsigned int i, cycles, shift, curr, next;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_txq *txq = macb_txq(queue);
- struct macb *bp = queue->bp;
unsigned char desc[24];
unsigned long flags;
- desc_size = macb_dma_desc_get_size(bp);
+ desc_size = macb_dma_desc_get_size(ctx->info->caps);
if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc)))
return;
@@ -838,7 +843,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
spin_lock_irqsave(&queue->tx_ptr_lock, flags);
head = txq->head;
tail = txq->tail;
- ring_size = bp->ctx->tx_ring_size;
+ ring_size = ctx->tx_ring_size;
count = CIRC_CNT(head, tail, ring_size);
if (!(tail % ring_size))
@@ -854,7 +859,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
cycles = gcd(ring_size, shift);
for (i = 0; i < cycles; i++) {
- memcpy(&desc, macb_tx_desc(queue, i), desc_size);
+ memcpy(&desc, macb_tx_desc(ctx, q, i), desc_size);
memcpy(&tx_skb, macb_tx_skb(queue, i),
sizeof(struct macb_tx_skb));
@@ -862,8 +867,8 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
next = (curr + shift) % ring_size;
while (next != i) {
- desc_curr = macb_tx_desc(queue, curr);
- desc_next = macb_tx_desc(queue, next);
+ desc_curr = macb_tx_desc(ctx, q, curr);
+ desc_next = macb_tx_desc(ctx, q, next);
memcpy(desc_curr, desc_next, desc_size);
@@ -880,7 +885,7 @@ static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
next = (curr + shift) % ring_size;
}
- desc_curr = macb_tx_desc(queue, curr);
+ desc_curr = macb_tx_desc(ctx, q, curr);
memcpy(desc_curr, &desc, desc_size);
if (i == ring_size - 1)
desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
@@ -937,7 +942,7 @@ static void macb_mac_link_up(struct phylink_config *config,
if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
ctrl &= ~MACB_BIT(PAE);
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
ctrl &= ~GEM_BIT(GBE);
if (speed == SPEED_1000)
@@ -968,7 +973,7 @@ static void macb_mac_link_up(struct phylink_config *config,
/* Enable Rx and Tx; Enable PTP unicast */
ctrl = macb_readl(bp, NCR);
- if (gem_has_ptp(bp))
+ if (gem_has_ptp(bp->caps))
ctrl |= MACB_BIT(PTPUNI);
macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
@@ -1078,7 +1083,8 @@ static int macb_mii_probe(struct net_device *netdev)
bp->phylink_config.supported_interfaces);
/* Determine what modes are supported */
- if (macb_is_gem(bp) && (bp->caps & MACB_CAPS_GIGABIT_MODE_AVAILABLE)) {
+ if (macb_is_gem(bp->caps) &&
+ (bp->caps & MACB_CAPS_GIGABIT_MODE_AVAILABLE)) {
bp->phylink_config.mac_capabilities |= MAC_1000FD;
if (!(bp->caps & MACB_CAPS_NO_GIGABIT_HALF))
bp->phylink_config.mac_capabilities |= MAC_1000HD;
@@ -1246,12 +1252,13 @@ static void macb_tx_unmap(struct macb *bp, struct macb_tx_skb *tx_skb, int budge
}
}
-static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma_addr_t addr)
+static void macb_set_addr(struct macb_context *ctx, struct macb_dma_desc *desc,
+ dma_addr_t addr)
{
- if (macb_dma64(bp)) {
+ if (macb_dma64(ctx->info->caps)) {
struct macb_dma_desc_64 *desc_64;
- desc_64 = macb_64b_desc(bp, desc);
+ desc_64 = macb_64b_desc(desc);
desc_64->addrh = upper_32_bits(addr);
/* The low bits of RX address contain the RX_USED bit, clearing
* of which allows packet RX. Make sure the high bits are also
@@ -1263,18 +1270,19 @@ static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma_addr_
desc->addr = lower_32_bits(addr);
}
-static dma_addr_t macb_get_addr(struct macb *bp, struct macb_dma_desc *desc)
+static dma_addr_t macb_get_addr(struct macb_context *ctx,
+ struct macb_dma_desc *desc)
{
dma_addr_t addr = 0;
- if (macb_dma64(bp)) {
+ if (macb_dma64(ctx->info->caps)) {
struct macb_dma_desc_64 *desc_64;
- desc_64 = macb_64b_desc(bp, desc);
+ desc_64 = macb_64b_desc(desc);
addr = ((u64)(desc_64->addrh) << 32);
}
addr |= MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr));
- if (macb_dma_ptp(bp))
+ if (macb_dma_ptp(ctx->info->caps))
addr &= ~GEM_BIT(DMA_RXVALID);
return addr;
}
@@ -1284,6 +1292,7 @@ static void macb_tx_error_task(struct work_struct *work)
struct macb_queue *queue = container_of(work, struct macb_queue,
tx_error_task);
unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_txq *txq = macb_txq(queue);
struct macb *bp = queue->bp;
struct macb_tx_skb *tx_skb;
@@ -1326,7 +1335,7 @@ static void macb_tx_error_task(struct work_struct *work)
for (tail = txq->tail; tail != txq->head; tail++) {
u32 ctrl;
- desc = macb_tx_desc(queue, tail);
+ desc = macb_tx_desc(ctx, q, tail);
ctrl = desc->ctrl;
tx_skb = macb_tx_skb(queue, tail);
skb = tx_skb->skb;
@@ -1345,7 +1354,7 @@ static void macb_tx_error_task(struct work_struct *work)
*/
if (!(ctrl & MACB_BIT(TX_BUF_EXHAUSTED))) {
netdev_vdbg(bp->netdev, "txerr skb %u (data %p) TX complete\n",
- macb_tx_ring_wrap(bp, tail),
+ macb_tx_ring_wrap(ctx, tail),
skb->data);
bp->netdev->stats.tx_packets++;
queue->stats.tx_packets++;
@@ -1373,8 +1382,8 @@ static void macb_tx_error_task(struct work_struct *work)
packets, bytes);
/* Set end of TX queue */
- desc = macb_tx_desc(queue, 0);
- macb_set_addr(bp, desc, 0);
+ desc = macb_tx_desc(ctx, q, 0);
+ macb_set_addr(ctx, desc, 0);
desc->ctrl = MACB_BIT(TX_USED);
/* Make descriptor updates visible to hardware */
@@ -1436,6 +1445,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
struct macb *bp = queue->bp;
struct macb_txq *txq = macb_txq(queue);
unsigned int q = queue - bp->queues;
+ struct macb_context *ctx = bp->ctx;
unsigned long flags;
unsigned int tail;
unsigned int head;
@@ -1450,7 +1460,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
struct macb_dma_desc *desc;
u32 ctrl;
- desc = macb_tx_desc(queue, tail);
+ desc = macb_tx_desc(ctx, q, tail);
/* Make hw descriptor updates visible to CPU */
rmb();
@@ -1475,7 +1485,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
gem_ptp_do_txstamp(bp, skb, desc);
netdev_vdbg(bp->netdev, "skb %u (data %p) TX complete\n",
- macb_tx_ring_wrap(bp, tail),
+ macb_tx_ring_wrap(ctx, tail),
skb->data);
bp->netdev->stats.tx_packets++;
queue->stats.tx_packets++;
@@ -1513,53 +1523,53 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
return packets;
}
-static void gem_rx_refill(struct macb_queue *queue)
+static void gem_rx_refill(struct macb_context *ctx, unsigned int q)
{
- struct macb_rxq *rxq = macb_rxq(queue);
- struct macb *bp = queue->bp;
+ struct device *dev = &ctx->info->pdev->dev;
+ struct macb_rxq *rxq = &ctx->rxq[q];
struct macb_dma_desc *desc;
struct sk_buff *skb;
unsigned int entry;
dma_addr_t paddr;
while (CIRC_SPACE(rxq->prepared_head, rxq->tail,
- bp->ctx->rx_ring_size) > 0) {
- entry = macb_rx_ring_wrap(bp, rxq->prepared_head);
+ ctx->rx_ring_size) > 0) {
+ entry = macb_rx_ring_wrap(ctx, rxq->prepared_head);
/* Make hw descriptor updates visible to CPU */
rmb();
- desc = macb_rx_desc(queue, entry);
+ desc = macb_rx_desc(ctx, q, entry);
if (!rxq->skbuff[entry]) {
/* allocate sk_buff for this free entry in ring */
- skb = netdev_alloc_skb(bp->netdev,
- bp->ctx->rx_buffer_size);
+ skb = netdev_alloc_skb(ctx->info->netdev,
+ ctx->rx_buffer_size);
if (unlikely(!skb)) {
- netdev_err(bp->netdev,
+ netdev_err(ctx->info->netdev,
"Unable to allocate sk_buff\n");
break;
}
/* now fill corresponding descriptor entry */
- paddr = dma_map_single(&bp->pdev->dev, skb->data,
- bp->ctx->rx_buffer_size,
+ paddr = dma_map_single(dev, skb->data,
+ ctx->rx_buffer_size,
DMA_FROM_DEVICE);
- if (dma_mapping_error(&bp->pdev->dev, paddr)) {
+ if (dma_mapping_error(dev, paddr)) {
dev_kfree_skb(skb);
break;
}
rxq->skbuff[entry] = skb;
- if (entry == bp->ctx->rx_ring_size - 1)
+ if (entry == ctx->rx_ring_size - 1)
paddr |= MACB_BIT(RX_WRAP);
desc->ctrl = 0;
/* Setting addr clears RX_USED and allows reception,
* make sure ctrl is cleared first to avoid a race.
*/
dma_wmb();
- macb_set_addr(bp, desc, paddr);
+ macb_set_addr(ctx, desc, paddr);
/* Properly align Ethernet header.
*
@@ -1572,7 +1582,7 @@ static void gem_rx_refill(struct macb_queue *queue)
* setting the low 2/3 bits.
* It is 3 bits if HW_DMA_CAP_PTP, else 2 bits.
*/
- if (!(bp->caps & MACB_CAPS_RSC))
+ if (!(ctx->info->caps & MACB_CAPS_RSC))
skb_reserve(skb, NET_IP_ALIGN);
} else {
desc->ctrl = 0;
@@ -1585,18 +1595,21 @@ static void gem_rx_refill(struct macb_queue *queue)
/* Make descriptor updates visible to hardware */
wmb();
- netdev_vdbg(bp->netdev, "rx ring: queue: %p, prepared head %d, tail %d\n",
- queue, rxq->prepared_head, rxq->tail);
+ netdev_vdbg(ctx->info->netdev,
+ "rx ring: queue: %u, prepared head %d, tail %d\n",
+ q, rxq->prepared_head, rxq->tail);
}
/* Mark DMA descriptors from begin up to and not including end as unused */
static void discard_partial_frame(struct macb_queue *queue, unsigned int begin,
unsigned int end)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
unsigned int frag;
for (frag = begin; frag != end; frag++) {
- struct macb_dma_desc *desc = macb_rx_desc(queue, frag);
+ struct macb_dma_desc *desc = macb_rx_desc(ctx, q, frag);
desc->addr &= ~MACB_BIT(RX_USED);
}
@@ -1613,6 +1626,8 @@ static void discard_partial_frame(struct macb_queue *queue, unsigned int begin,
static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
int budget)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
@@ -1626,14 +1641,14 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
dma_addr_t addr;
bool rxused;
- entry = macb_rx_ring_wrap(bp, rxq->tail);
- desc = macb_rx_desc(queue, entry);
+ entry = macb_rx_ring_wrap(ctx, rxq->tail);
+ desc = macb_rx_desc(ctx, q, entry);
/* Make hw descriptor updates visible to CPU */
rmb();
rxused = (desc->addr & MACB_BIT(RX_USED)) ? true : false;
- addr = macb_get_addr(bp, desc);
+ addr = macb_get_addr(ctx, desc);
if (!rxused)
break;
@@ -1697,7 +1712,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
napi_gro_receive(napi, skb);
}
- gem_rx_refill(queue);
+ gem_rx_refill(ctx, q);
return count;
}
@@ -1705,6 +1720,8 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
unsigned int first_frag, unsigned int last_frag)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
unsigned int offset;
@@ -1712,12 +1729,12 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
unsigned int frag;
unsigned int len;
- desc = macb_rx_desc(queue, last_frag);
+ desc = macb_rx_desc(ctx, q, last_frag);
len = desc->ctrl & bp->rx_frm_len_mask;
netdev_vdbg(bp->netdev, "macb_rx_frame frags %u - %u (len %u)\n",
- macb_rx_ring_wrap(bp, first_frag),
- macb_rx_ring_wrap(bp, last_frag), len);
+ macb_rx_ring_wrap(ctx, first_frag),
+ macb_rx_ring_wrap(ctx, last_frag), len);
/* The ethernet header starts NET_IP_ALIGN bytes into the
* first buffer. Since the header is 14 bytes, this makes the
@@ -1731,7 +1748,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
if (!skb) {
bp->netdev->stats.rx_dropped++;
for (frag = first_frag; ; frag++) {
- desc = macb_rx_desc(queue, frag);
+ desc = macb_rx_desc(ctx, q, frag);
desc->addr &= ~MACB_BIT(RX_USED);
if (frag == last_frag)
break;
@@ -1762,7 +1779,7 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
macb_rx_buffer(queue, frag),
frag_len);
offset += bp->ctx->rx_buffer_size;
- desc = macb_rx_desc(queue, frag);
+ desc = macb_rx_desc(ctx, q, frag);
desc->addr &= ~MACB_BIT(RX_USED);
if (frag == last_frag)
@@ -1784,20 +1801,19 @@ static int macb_rx_frame(struct macb_queue *queue, struct napi_struct *napi,
return 0;
}
-static inline void macb_init_rx_ring(struct macb_queue *queue)
+static inline void macb_init_rx_ring(struct macb_context *ctx, unsigned int q)
{
- struct macb_rxq *rxq = macb_rxq(queue);
+ struct macb_rxq *rxq = &ctx->rxq[q];
struct macb_dma_desc *desc = NULL;
- struct macb *bp = queue->bp;
dma_addr_t addr;
int i;
addr = rxq->buffers_dma;
- for (i = 0; i < bp->ctx->rx_ring_size; i++) {
- desc = macb_rx_desc(queue, i);
- macb_set_addr(bp, desc, addr);
+ for (i = 0; i < ctx->rx_ring_size; i++) {
+ desc = macb_rx_desc(ctx, q, i);
+ macb_set_addr(ctx, desc, addr);
desc->ctrl = 0;
- addr += bp->ctx->rx_buffer_size;
+ addr += ctx->rx_buffer_size;
}
desc->addr |= MACB_BIT(RX_WRAP);
rxq->tail = 0;
@@ -1806,6 +1822,8 @@ static inline void macb_init_rx_ring(struct macb_queue *queue)
static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
int budget)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_rxq *rxq = macb_rxq(queue);
struct macb *bp = queue->bp;
bool reset_rx_queue = false;
@@ -1814,7 +1832,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
int received = 0;
for (tail = rxq->tail; budget > 0; tail++) {
- struct macb_dma_desc *desc = macb_rx_desc(queue, tail);
+ struct macb_dma_desc *desc = macb_rx_desc(ctx, q, tail);
u32 ctrl;
/* Make hw descriptor updates visible to CPU */
@@ -1866,7 +1884,7 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
ctrl = macb_readl(bp, NCR);
macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE));
- macb_init_rx_ring(queue);
+ macb_init_rx_ring(ctx, q);
queue_writel(queue, RBQP, rxq->ring_dma);
macb_writel(bp, NCR, ctrl | MACB_BIT(RE));
@@ -1885,13 +1903,14 @@ static int macb_rx(struct macb_queue *queue, struct napi_struct *napi,
static bool macb_rx_pending(struct macb_queue *queue)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_rxq *rxq = macb_rxq(queue);
- struct macb *bp = queue->bp;
struct macb_dma_desc *desc;
unsigned int entry;
- entry = macb_rx_ring_wrap(bp, rxq->tail);
- desc = macb_rx_desc(queue, entry);
+ entry = macb_rx_ring_wrap(ctx, rxq->tail);
+ desc = macb_rx_desc(ctx, q, entry);
/* Make hw descriptor updates visible to CPU */
rmb();
@@ -1939,6 +1958,7 @@ static int macb_rx_poll(struct napi_struct *napi, int budget)
static void macb_tx_restart(struct macb_queue *queue)
{
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_txq *txq = macb_txq(queue);
struct macb *bp = queue->bp;
unsigned int head_idx, tbqp;
@@ -1949,9 +1969,9 @@ static void macb_tx_restart(struct macb_queue *queue)
if (txq->head == txq->tail)
goto out_tx_ptr_unlock;
- tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
- tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
- head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, txq->head));
+ tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(ctx->info->caps);
+ tbqp = macb_adj_dma_desc_idx(ctx, macb_tx_ring_wrap(ctx, tbqp));
+ head_idx = macb_adj_dma_desc_idx(ctx, macb_tx_ring_wrap(ctx, txq->head));
if (tbqp == head_idx)
goto out_tx_ptr_unlock;
@@ -1966,6 +1986,8 @@ static void macb_tx_restart(struct macb_queue *queue)
static bool macb_tx_complete_pending(struct macb_queue *queue)
{
+ unsigned int q = queue - queue->bp->queues;
+ struct macb_context *ctx = queue->bp->ctx;
struct macb_txq *txq = macb_txq(queue);
bool retval = false;
unsigned long flags;
@@ -1975,7 +1997,7 @@ static bool macb_tx_complete_pending(struct macb_queue *queue)
/* Make hw descriptor updates visible to CPU */
rmb();
- if (macb_tx_desc(queue, txq->tail)->ctrl & MACB_BIT(TX_USED))
+ if (macb_tx_desc(ctx, q, txq->tail)->ctrl & MACB_BIT(TX_USED))
retval = true;
}
spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
@@ -2029,6 +2051,7 @@ static void macb_hresp_error_task(struct work_struct *work)
{
struct macb *bp = from_work(bp, work, hresp_err_bh_work);
struct net_device *netdev = bp->netdev;
+ struct macb_context *ctx = bp->ctx;
struct macb_queue *queue;
unsigned int q;
u32 ctrl;
@@ -2045,7 +2068,7 @@ static void macb_hresp_error_task(struct work_struct *work)
netif_tx_stop_all_queues(netdev);
netif_carrier_off(netdev);
- bp->macbgem_ops.mog_init_rings(bp);
+ bp->macbgem_ops.mog_init_rings(ctx);
/* Initialize TX and RX buffers */
macb_init_buffers(bp);
@@ -2218,7 +2241,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (status & MACB_BIT(ISR_ROVR)) {
/* We missed at least one packet */
spin_lock(&bp->stats_lock);
- if (macb_is_gem(bp))
+ if (macb_is_gem(bp->caps))
bp->hw_stats.gem.rx_overruns++;
else
bp->hw_stats.macb.rx_overruns++;
@@ -2270,6 +2293,8 @@ static unsigned int macb_tx_map(struct macb *bp,
unsigned int f, nr_frags = skb_shinfo(skb)->nr_frags;
unsigned int len, i, tx_head = txq->head;
u32 ctrl, lso_ctrl = 0, seq_ctrl = 0;
+ unsigned int q = queue - bp->queues;
+ struct macb_context *ctx = bp->ctx;
unsigned int eof = 1, mss_mfs = 0;
struct macb_tx_skb *tx_skb = NULL;
struct macb_dma_desc *desc;
@@ -2360,7 +2385,7 @@ static unsigned int macb_tx_map(struct macb *bp,
*/
i = tx_head;
ctrl = MACB_BIT(TX_USED);
- desc = macb_tx_desc(queue, i);
+ desc = macb_tx_desc(ctx, q, i);
desc->ctrl = ctrl;
if (lso_ctrl) {
@@ -2381,14 +2406,14 @@ static unsigned int macb_tx_map(struct macb *bp,
do {
i--;
tx_skb = macb_tx_skb(queue, i);
- desc = macb_tx_desc(queue, i);
+ desc = macb_tx_desc(ctx, q, i);
ctrl = (u32)tx_skb->size;
if (eof) {
ctrl |= MACB_BIT(TX_LAST);
eof = 0;
}
- if (unlikely(macb_tx_ring_wrap(bp, i) ==
- bp->ctx->tx_ring_size - 1))
+ if (unlikely(macb_tx_ring_wrap(ctx, i) ==
+ ctx->tx_ring_size - 1))
ctrl |= MACB_BIT(TX_WRAP);
@@ -2407,7 +2432,7 @@ static unsigned int macb_tx_map(struct macb *bp,
ctrl |= MACB_BF(MSS_MFS, mss_mfs);
/* Set TX buffer descriptor */
- macb_set_addr(bp, desc, tx_skb->mapping);
+ macb_set_addr(ctx, desc, tx_skb->mapping);
/* desc->addr must be visible to hardware before clearing
* 'TX_USED' bit in desc->ctrl.
*/
@@ -2558,7 +2583,7 @@ static netdev_tx_t macb_start_xmit(struct sk_buff *skb,
return ret;
}
- if (macb_dma_ptp(bp) &&
+ if (macb_dma_ptp(bp->caps) &&
(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
@@ -2645,7 +2670,7 @@ static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
{
unsigned int size;
- if (!macb_is_gem(bp)) {
+ if (!macb_is_gem(bp->caps)) {
size = MACB_RX_BUFFER_SIZE;
} else {
size = mtu + ETH_HLEN + ETH_FCS_LEN + NET_IP_ALIGN;
@@ -2663,33 +2688,32 @@ static unsigned int macb_rx_buffer_size(struct macb *bp, unsigned int mtu)
return size;
}
-static void gem_free_rx_buffers(struct macb *bp)
+static void gem_free_rx_buffers(struct macb_context *ctx)
{
+ struct device *dev = &ctx->info->pdev->dev;
struct macb_dma_desc *desc;
- struct macb_queue *queue;
struct macb_rxq *rxq;
struct sk_buff *skb;
dma_addr_t addr;
unsigned int q;
int i;
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- rxq = &bp->ctx->rxq[q];
+ for (q = 0; q < ctx->info->num_queues; ++q) {
+ rxq = &ctx->rxq[q];
if (!rxq->skbuff)
continue;
- for (i = 0; i < bp->ctx->rx_ring_size; i++) {
+ for (i = 0; i < ctx->rx_ring_size; i++) {
skb = rxq->skbuff[i];
if (!skb)
continue;
- desc = macb_rx_desc(queue, i);
- addr = macb_get_addr(bp, desc);
+ desc = macb_rx_desc(ctx, q, i);
+ addr = macb_get_addr(ctx, desc);
- dma_unmap_single(&bp->pdev->dev, addr,
- bp->ctx->rx_buffer_size,
+ dma_unmap_single(dev, addr, ctx->rx_buffer_size,
DMA_FROM_DEVICE);
dev_kfree_skb_any(skb);
skb = NULL;
@@ -2700,52 +2724,52 @@ static void gem_free_rx_buffers(struct macb *bp)
}
}
-static void macb_free_rx_buffers(struct macb *bp)
+static void macb_free_rx_buffers(struct macb_context *ctx)
{
- struct macb_rxq *rxq = &bp->ctx->rxq[0];
+ struct device *dev = &ctx->info->pdev->dev;
+ struct macb_rxq *rxq = &ctx->rxq[0];
if (rxq->buffers) {
- dma_free_coherent(&bp->pdev->dev,
- bp->ctx->rx_ring_size *
- bp->ctx->rx_buffer_size,
+ dma_free_coherent(dev,
+ ctx->rx_ring_size * ctx->rx_buffer_size,
rxq->buffers, rxq->buffers_dma);
rxq->buffers = NULL;
}
}
-static unsigned int macb_tx_ring_size_per_queue(struct macb *bp)
+static unsigned int macb_tx_ring_size_per_queue(struct macb_context *ctx)
{
- return macb_dma_desc_get_size(bp) * bp->ctx->tx_ring_size +
- bp->tx_bd_rd_prefetch;
+ return macb_dma_desc_get_size(ctx->info->caps) * ctx->tx_ring_size +
+ ctx->info->tx_bd_rd_prefetch;
}
-static unsigned int macb_rx_ring_size_per_queue(struct macb *bp)
+static unsigned int macb_rx_ring_size_per_queue(struct macb_context *ctx)
{
- return macb_dma_desc_get_size(bp) * bp->ctx->rx_ring_size +
- bp->rx_bd_rd_prefetch;
+ return macb_dma_desc_get_size(ctx->info->caps) * ctx->rx_ring_size +
+ ctx->info->rx_bd_rd_prefetch;
}
-static void macb_free_consistent(struct macb *bp)
+static void macb_free_consistent(struct macb_context *ctx)
{
- struct device *dev = &bp->pdev->dev;
+ struct device *dev = &ctx->info->pdev->dev;
struct macb_txq *txq;
struct macb_rxq *rxq;
unsigned int q;
size_t size;
- bp->macbgem_ops.mog_free_rx_buffers(bp);
+ ctx->info->macbgem_ops.mog_free_rx_buffers(ctx);
- txq = &bp->ctx->txq[0];
- size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+ txq = &ctx->txq[0];
+ size = ctx->info->num_queues * macb_tx_ring_size_per_queue(ctx);
dma_free_coherent(dev, size, txq->ring, txq->ring_dma);
- rxq = &bp->ctx->rxq[0];
- size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+ rxq = &ctx->rxq[0];
+ size = ctx->info->num_queues * macb_rx_ring_size_per_queue(ctx);
dma_free_coherent(dev, size, rxq->ring, rxq->ring_dma);
- for (q = 0; q < bp->num_queues; ++q) {
- txq = &bp->ctx->txq[q];
- rxq = &bp->ctx->rxq[q];
+ for (q = 0; q < ctx->info->num_queues; ++q) {
+ txq = &ctx->txq[q];
+ rxq = &ctx->rxq[q];
kfree(txq->skb);
txq->skb = NULL;
@@ -2754,46 +2778,48 @@ static void macb_free_consistent(struct macb *bp)
}
}
-static int gem_alloc_rx_buffers(struct macb *bp)
+static int gem_alloc_rx_buffers(struct macb_context *ctx)
{
struct macb_rxq *rxq;
unsigned int q;
int size;
- for (q = 0; q < bp->num_queues; ++q) {
- rxq = &bp->ctx->rxq[q];
- size = bp->ctx->rx_ring_size * sizeof(struct sk_buff *);
+ for (q = 0; q < ctx->info->num_queues; ++q) {
+ rxq = &ctx->rxq[q];
+ size = ctx->rx_ring_size * sizeof(struct sk_buff *);
rxq->skbuff = kzalloc(size, GFP_KERNEL);
if (!rxq->skbuff)
return -ENOMEM;
else
- netdev_dbg(bp->netdev,
+ netdev_dbg(ctx->info->netdev,
"Allocated %d RX struct sk_buff entries at %p\n",
- bp->ctx->rx_ring_size, rxq->skbuff);
+ ctx->rx_ring_size, rxq->skbuff);
}
return 0;
}
-static int macb_alloc_rx_buffers(struct macb *bp)
+static int macb_alloc_rx_buffers(struct macb_context *ctx)
{
- struct macb_rxq *rxq = &bp->ctx->rxq[0];
+ struct device *dev = &ctx->info->pdev->dev;
+ struct macb_rxq *rxq = &ctx->rxq[0];
int size;
- size = bp->ctx->rx_ring_size * bp->ctx->rx_buffer_size;
- rxq->buffers = dma_alloc_coherent(&bp->pdev->dev, size,
+ size = ctx->rx_ring_size * ctx->rx_buffer_size;
+ rxq->buffers = dma_alloc_coherent(dev, size,
&rxq->buffers_dma, GFP_KERNEL);
if (!rxq->buffers)
return -ENOMEM;
- netdev_dbg(bp->netdev,
+ netdev_dbg(ctx->info->netdev,
"Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
size, (unsigned long)rxq->buffers_dma, rxq->buffers);
return 0;
}
-static int macb_alloc_consistent(struct macb *bp)
+static int macb_alloc_consistent(struct macb_context *ctx)
{
- struct device *dev = &bp->pdev->dev;
+ unsigned int num_queues = ctx->info->num_queues;
+ struct device *dev = &ctx->info->pdev->dev;
dma_addr_t tx_dma, rx_dma;
struct macb_txq *txq;
struct macb_rxq *rxq;
@@ -2808,89 +2834,90 @@ static int macb_alloc_consistent(struct macb *bp)
* natural alignment of physical addresses.
*/
- size = bp->num_queues * macb_tx_ring_size_per_queue(bp);
+ size = num_queues * macb_tx_ring_size_per_queue(ctx);
tx = dma_alloc_coherent(dev, size, &tx_dma, GFP_KERNEL);
if (!tx || upper_32_bits(tx_dma) != upper_32_bits(tx_dma + size - 1))
goto out_err;
- netdev_dbg(bp->netdev, "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
- size, bp->num_queues, (unsigned long)tx_dma, tx);
+ netdev_dbg(ctx->info->netdev,
+ "Allocated %zu bytes for %u TX rings at %08lx (mapped %p)\n",
+ size, num_queues, (unsigned long)tx_dma, tx);
- size = bp->num_queues * macb_rx_ring_size_per_queue(bp);
+ size = num_queues * macb_rx_ring_size_per_queue(ctx);
rx = dma_alloc_coherent(dev, size, &rx_dma, GFP_KERNEL);
if (!rx || upper_32_bits(rx_dma) != upper_32_bits(rx_dma + size - 1))
goto out_err;
- netdev_dbg(bp->netdev, "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
- size, bp->num_queues, (unsigned long)rx_dma, rx);
+ netdev_dbg(ctx->info->netdev,
+ "Allocated %zu bytes for %u RX rings at %08lx (mapped %p)\n",
+ size, num_queues, (unsigned long)rx_dma, rx);
- for (q = 0; q < bp->num_queues; ++q) {
- txq = &bp->ctx->txq[q];
- rxq = &bp->ctx->rxq[q];
+ for (q = 0; q < num_queues; ++q) {
+ txq = &ctx->txq[q];
+ rxq = &ctx->rxq[q];
- txq->ring = tx + macb_tx_ring_size_per_queue(bp) * q;
- txq->ring_dma = tx_dma + macb_tx_ring_size_per_queue(bp) * q;
+ txq->ring = tx + macb_tx_ring_size_per_queue(ctx) * q;
+ txq->ring_dma = tx_dma + macb_tx_ring_size_per_queue(ctx) * q;
- rxq->ring = rx + macb_rx_ring_size_per_queue(bp) * q;
- rxq->ring_dma = rx_dma + macb_rx_ring_size_per_queue(bp) * q;
+ rxq->ring = rx + macb_rx_ring_size_per_queue(ctx) * q;
+ rxq->ring_dma = rx_dma + macb_rx_ring_size_per_queue(ctx) * q;
- size = bp->ctx->tx_ring_size * sizeof(struct macb_tx_skb);
+ size = ctx->tx_ring_size * sizeof(struct macb_tx_skb);
txq->skb = kmalloc(size, GFP_KERNEL);
if (!txq->skb)
goto out_err;
}
- if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
+ if (ctx->info->macbgem_ops.mog_alloc_rx_buffers(ctx))
goto out_err;
return 0;
out_err:
- macb_free_consistent(bp);
+ macb_free_consistent(ctx);
return -ENOMEM;
}
-static void gem_init_rx_ring(struct macb_queue *queue)
+static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
{
- struct macb_rxq *rxq = macb_rxq(queue);
+ struct macb_rxq *rxq = &ctx->rxq[q];
rxq->tail = 0;
rxq->prepared_head = 0;
- gem_rx_refill(queue);
+ gem_rx_refill(ctx, q);
}
-static void gem_init_rings(struct macb *bp)
+static void gem_init_rings(struct macb_context *ctx)
{
- struct macb_queue *queue;
struct macb_dma_desc *desc = NULL;
struct macb_txq *txq;
unsigned int q;
int i;
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
- txq = &bp->ctx->txq[q];
- for (i = 0; i < bp->ctx->tx_ring_size; i++) {
- desc = macb_tx_desc(queue, i);
- macb_set_addr(bp, desc, 0);
+ for (q = 0; q < ctx->info->num_queues; ++q) {
+ txq = &ctx->txq[q];
+ for (i = 0; i < ctx->tx_ring_size; i++) {
+ desc = macb_tx_desc(ctx, q, i);
+ macb_set_addr(ctx, desc, 0);
desc->ctrl = MACB_BIT(TX_USED);
}
desc->ctrl |= MACB_BIT(TX_WRAP);
txq->head = 0;
txq->tail = 0;
- gem_init_rx_ring(queue);
+ gem_init_rx_ring(ctx, q);
}
}
-static void macb_init_rings(struct macb *bp)
+static void macb_init_rings(struct macb_context *ctx)
{
- struct macb_txq *txq = &bp->ctx->txq[0];
+ struct macb_txq *txq = &ctx->txq[0];
struct macb_dma_desc *desc = NULL;
int i;
- macb_init_rx_ring(&bp->queues[0]);
+ macb_init_rx_ring(ctx, 0);
- for (i = 0; i < bp->ctx->tx_ring_size; i++) {
- desc = macb_tx_desc(&bp->queues[0], i);
- macb_set_addr(bp, desc, 0);
+ for (i = 0; i < ctx->tx_ring_size; i++) {
+ desc = macb_tx_desc(ctx, 0, i);
+ macb_set_addr(ctx, desc, 0);
desc->ctrl = MACB_BIT(TX_USED);
}
txq->head = 0;
@@ -2960,7 +2987,7 @@ static u32 macb_mdc_clk_div(struct macb *bp)
u32 config;
unsigned long pclk_hz;
- if (macb_is_gem(bp))
+ if (macb_is_gem(bp->caps))
return gem_mdc_clk_div(bp);
pclk_hz = clk_get_rate(bp->pclk);
@@ -2982,7 +3009,7 @@ static u32 macb_mdc_clk_div(struct macb *bp)
*/
static u32 macb_dbw(struct macb *bp)
{
- if (!macb_is_gem(bp))
+ if (!macb_is_gem(bp->caps))
return 0;
switch (GEM_BFEXT(DBWDEF, gem_readl(bp, DCFG1))) {
@@ -3011,7 +3038,7 @@ static void macb_configure_dma(struct macb *bp)
u32 dmacfg;
buffer_size = bp->ctx->rx_buffer_size / RX_BUFFER_MULTIPLE;
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
dmacfg = gem_readl(bp, DMACFG) & ~GEM_BF(RXBS, -1L);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
if (q)
@@ -3035,9 +3062,9 @@ static void macb_configure_dma(struct macb *bp)
dmacfg &= ~GEM_BIT(TXCOEN);
dmacfg &= ~GEM_BIT(ADDR64);
- if (macb_dma64(bp))
+ if (macb_dma64(bp->caps))
dmacfg |= GEM_BIT(ADDR64);
- if (macb_dma_ptp(bp))
+ if (macb_dma_ptp(bp->caps))
dmacfg |= GEM_BIT(RXEXT) | GEM_BIT(TXEXT);
netdev_dbg(bp->netdev, "Cadence configure DMA with 0x%08x\n",
dmacfg);
@@ -3065,7 +3092,7 @@ static void macb_init_hw(struct macb *bp)
config |= MACB_BIT(BIG); /* Receive oversized frames */
if (bp->netdev->flags & IFF_PROMISC)
config |= MACB_BIT(CAF); /* Copy All Frames */
- else if (macb_is_gem(bp) && bp->netdev->features & NETIF_F_RXCSUM)
+ else if (macb_is_gem(bp->caps) && bp->netdev->features & NETIF_F_RXCSUM)
config |= GEM_BIT(RXCOEN);
if (!(bp->netdev->flags & IFF_BROADCAST))
config |= MACB_BIT(NBC); /* No BroadCast */
@@ -3173,14 +3200,14 @@ static void macb_set_rx_mode(struct net_device *netdev)
cfg |= MACB_BIT(CAF);
/* Disable RX checksum offload */
- if (macb_is_gem(bp))
+ if (macb_is_gem(bp->caps))
cfg &= ~GEM_BIT(RXCOEN);
} else {
/* Disable promiscuous mode */
cfg &= ~MACB_BIT(CAF);
/* Enable RX checksum offload only if requested */
- if (macb_is_gem(bp) && netdev->features & NETIF_F_RXCSUM)
+ if (macb_is_gem(bp->caps) && netdev->features & NETIF_F_RXCSUM)
cfg |= GEM_BIT(RXCOEN);
}
@@ -3222,19 +3249,21 @@ static int macb_open(struct net_device *netdev)
goto pm_exit;
}
+ bp->ctx->info = &bp->info;
+
/* RX buffers initialization */
bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
- err = macb_alloc_consistent(bp);
+ err = macb_alloc_consistent(bp->ctx);
if (err) {
netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
err);
goto free_ctx;
}
- bp->macbgem_ops.mog_init_rings(bp);
+ bp->macbgem_ops.mog_init_rings(bp->ctx);
macb_init_buffers(bp);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -3272,7 +3301,7 @@ static int macb_open(struct net_device *netdev)
napi_disable(&queue->napi_rx);
napi_disable(&queue->napi_tx);
}
- macb_free_consistent(bp);
+ macb_free_consistent(bp->ctx);
free_ctx:
kfree(bp->ctx);
bp->ctx = NULL;
@@ -3308,7 +3337,7 @@ static int macb_close(struct net_device *netdev)
netif_carrier_off(netdev);
spin_unlock_irqrestore(&bp->lock, flags);
- macb_free_consistent(bp);
+ macb_free_consistent(bp->ctx);
kfree(bp->ctx);
bp->ctx = NULL;
@@ -3461,7 +3490,7 @@ static void macb_get_stats(struct net_device *netdev,
struct macb_stats *hwstat = &bp->hw_stats.macb;
netdev_stats_to_stats64(nstat, &bp->netdev->stats);
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
gem_get_stats(bp, nstat);
return;
}
@@ -3684,8 +3713,8 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
| MACB_GREGS_VERSION;
- tail = macb_tx_ring_wrap(bp, txq->tail);
- head = macb_tx_ring_wrap(bp, txq->head);
+ tail = macb_tx_ring_wrap(bp->ctx, txq->tail);
+ head = macb_tx_ring_wrap(bp->ctx, txq->head);
regs_buff[0] = macb_readl(bp, NCR);
regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
@@ -3703,7 +3732,7 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
if (!(bp->caps & MACB_CAPS_USRIO_DISABLED))
regs_buff[12] = macb_or_gem_readl(bp, USRIO);
- if (macb_is_gem(bp))
+ if (macb_is_gem(bp->caps))
regs_buff[13] = gem_readl(bp, DMACFG);
}
@@ -3835,7 +3864,7 @@ static int gem_get_ts_info(struct net_device *netdev,
{
struct macb *bp = netdev_priv(netdev);
- if (!macb_dma_ptp(bp)) {
+ if (!macb_dma_ptp(bp->caps)) {
ethtool_op_get_ts_info(netdev, info);
return 0;
}
@@ -3936,7 +3965,7 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
bool cmp_b = false;
bool cmp_c = false;
- if (!macb_is_gem(bp))
+ if (!macb_is_gem(bp->caps))
return;
tp4sp_v = &(fs->h_u.tcp_ip4_spec);
@@ -4297,7 +4326,7 @@ static inline void macb_set_txcsum_feature(struct macb *bp,
{
u32 val;
- if (!macb_is_gem(bp))
+ if (!macb_is_gem(bp->caps))
return;
val = gem_readl(bp, DMACFG);
@@ -4315,7 +4344,7 @@ static inline void macb_set_rxcsum_feature(struct macb *bp,
struct net_device *netdev = bp->netdev;
u32 val;
- if (!macb_is_gem(bp))
+ if (!macb_is_gem(bp->caps))
return;
val = gem_readl(bp, NCFGR);
@@ -4330,7 +4359,7 @@ static inline void macb_set_rxcsum_feature(struct macb *bp,
static inline void macb_set_rxflow_feature(struct macb *bp,
netdev_features_t features)
{
- if (!macb_is_gem(bp))
+ if (!macb_is_gem(bp->caps))
return;
gem_enable_flow_filters(bp, !!(features & NETIF_F_NTUPLE));
@@ -4649,7 +4678,7 @@ static void macb_configure_caps(struct macb *bp,
bp->caps |= MACB_CAPS_FIFO_MODE;
if (GEM_BFEXT(PBUF_RSC, gem_readl(bp, DCFG6)))
bp->caps |= MACB_CAPS_RSC;
- if (gem_has_ptp(bp)) {
+ if (gem_has_ptp(bp->caps)) {
if (!GEM_BFEXT(TSU, gem_readl(bp, DCFG5)))
dev_err(&bp->pdev->dev,
"GEM doesn't support hardware ptp.\n");
@@ -4861,7 +4890,7 @@ static int macb_init_dflt(struct platform_device *pdev)
netdev->netdev_ops = &macb_netdev_ops;
/* setup appropriated routines according to adapter type */
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
bp->macbgem_ops.mog_alloc_rx_buffers = gem_alloc_rx_buffers;
bp->macbgem_ops.mog_free_rx_buffers = gem_free_rx_buffers;
bp->macbgem_ops.mog_init_rings = gem_init_rings;
@@ -4890,7 +4919,7 @@ static int macb_init_dflt(struct platform_device *pdev)
netdev->hw_features |= MACB_NETIF_LSO;
/* Checksum offload is only available on gem with packet buffer */
- if (macb_is_gem(bp) && !(bp->caps & MACB_CAPS_FIFO_MODE))
+ if (macb_is_gem(bp->caps) && !(bp->caps & MACB_CAPS_FIFO_MODE))
netdev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
if (bp->caps & MACB_CAPS_SG_DISABLED)
netdev->hw_features &= ~NETIF_F_SG;
@@ -5009,7 +5038,7 @@ static int at91ether_alloc_coherent(struct macb *bp)
rxq->ring = dma_alloc_coherent(&bp->pdev->dev,
(AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(bp)),
+ macb_dma_desc_get_size(bp->caps)),
&rxq->ring_dma, GFP_KERNEL);
if (!rxq->ring)
return -ENOMEM;
@@ -5022,7 +5051,7 @@ static int at91ether_alloc_coherent(struct macb *bp)
if (!rxq->buffers) {
dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(bp),
+ macb_dma_desc_get_size(bp->caps),
rxq->ring, rxq->ring_dma);
rxq->ring = NULL;
return -ENOMEM;
@@ -5038,7 +5067,7 @@ static void at91ether_free_coherent(struct macb *bp)
if (rxq->ring) {
dma_free_coherent(&bp->pdev->dev,
AT91ETHER_MAX_RX_DESCR *
- macb_dma_desc_get_size(bp),
+ macb_dma_desc_get_size(bp->caps),
rxq->ring, rxq->ring_dma);
rxq->ring = NULL;
}
@@ -5055,7 +5084,6 @@ static void at91ether_free_coherent(struct macb *bp)
/* Initialize and start the Receiver and Transmit subsystems */
static int at91ether_start(struct macb *bp)
{
- struct macb_queue *queue = &bp->queues[0];
struct macb_rxq *rxq = &bp->ctx->rxq[0];
struct macb_dma_desc *desc;
dma_addr_t addr;
@@ -5068,8 +5096,8 @@ static int at91ether_start(struct macb *bp)
addr = rxq->buffers_dma;
for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) {
- desc = macb_rx_desc(queue, i);
- macb_set_addr(bp, desc, addr);
+ desc = macb_rx_desc(bp->ctx, 0, i);
+ macb_set_addr(bp->ctx, desc, addr);
desc->ctrl = 0;
addr += AT91ETHER_MAX_RBUFF_SZ;
}
@@ -5218,14 +5246,13 @@ static netdev_tx_t at91ether_start_xmit(struct sk_buff *skb,
static void at91ether_rx(struct net_device *netdev)
{
struct macb *bp = netdev_priv(netdev);
- struct macb_queue *queue = &bp->queues[0];
struct macb_rxq *rxq = &bp->ctx->rxq[0];
struct macb_dma_desc *desc;
unsigned char *p_recv;
struct sk_buff *skb;
unsigned int pktlen;
- desc = macb_rx_desc(queue, rxq->tail);
+ desc = macb_rx_desc(bp->ctx, 0, rxq->tail);
while (desc->addr & MACB_BIT(RX_USED)) {
p_recv = rxq->buffers + rxq->tail * AT91ETHER_MAX_RBUFF_SZ;
pktlen = MACB_BF(RX_FRMLEN, desc->ctrl);
@@ -5254,7 +5281,7 @@ static void at91ether_rx(struct net_device *netdev)
else
rxq->tail++;
- desc = macb_rx_desc(queue, rxq->tail);
+ desc = macb_rx_desc(bp->ctx, 0, rxq->tail);
}
}
@@ -5584,7 +5611,7 @@ static int macb_alloc_tieoff(struct macb *bp)
return 0;
bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
- macb_dma_desc_get_size(bp),
+ macb_dma_desc_get_size(bp->caps),
&bp->rx_ring_tieoff_dma,
GFP_KERNEL);
if (!bp->rx_ring_tieoff)
@@ -5598,7 +5625,7 @@ static void macb_free_tieoff(struct macb *bp)
if (!bp->rx_ring_tieoff)
return;
- dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
+ dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp->caps),
bp->rx_ring_tieoff,
bp->rx_ring_tieoff_dma);
bp->rx_ring_tieoff = NULL;
@@ -5986,12 +6013,12 @@ static int macb_probe(struct platform_device *pdev)
val = GEM_BFEXT(RXBD_RDBUFF, gem_readl(bp, DCFG10));
if (val)
bp->rx_bd_rd_prefetch = (2 << (val - 1)) *
- macb_dma_desc_get_size(bp);
+ macb_dma_desc_get_size(bp->caps);
val = GEM_BFEXT(TXBD_RDBUFF, gem_readl(bp, DCFG10));
if (val)
bp->tx_bd_rd_prefetch = (2 << (val - 1)) *
- macb_dma_desc_get_size(bp);
+ macb_dma_desc_get_size(bp->caps);
}
bp->rx_intr_mask = MACB_RX_INT_FLAGS;
@@ -6036,7 +6063,7 @@ static int macb_probe(struct platform_device *pdev)
INIT_DELAYED_WORK(&bp->tx_lpi_work, macb_tx_lpi_work_fn);
netdev_info(netdev, "Cadence %s rev 0x%08x at 0x%08lx irq %d (%pM)\n",
- macb_is_gem(bp) ? "GEM" : "MACB", macb_readl(bp, MID),
+ macb_is_gem(bp->caps) ? "GEM" : "MACB", macb_readl(bp, MID),
netdev->base_addr, netdev->irq, netdev->dev_addr);
pm_runtime_put_autosuspend(&bp->pdev->dev);
@@ -6171,7 +6198,7 @@ static int __maybe_unused macb_suspend(struct device *dev)
* Enable WoL IRQ on queue 0
*/
devm_free_irq(dev, bp->queues[0].irq, bp->queues);
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
err = devm_request_irq(dev, bp->queues[0].irq, gem_wol_interrupt,
IRQF_SHARED, netdev->name, bp->queues);
if (err) {
@@ -6236,6 +6263,7 @@ static int __maybe_unused macb_resume(struct device *dev)
{
struct net_device *netdev = dev_get_drvdata(dev);
struct macb *bp = netdev_priv(netdev);
+ struct macb_context *ctx = bp->ctx;
struct macb_queue *queue;
unsigned long flags;
unsigned int q;
@@ -6253,7 +6281,7 @@ static int __maybe_unused macb_resume(struct device *dev)
if (bp->wol & MACB_WOL_ENABLED) {
spin_lock_irqsave(&bp->lock, flags);
/* Disable WoL */
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp->caps)) {
queue_writel(bp->queues, IDR, GEM_BIT(WOL));
gem_writel(bp, WOL, 0);
} else {
@@ -6293,10 +6321,10 @@ static int __maybe_unused macb_resume(struct device *dev)
for (q = 0, queue = bp->queues; q < bp->num_queues;
++q, ++queue) {
if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
- if (macb_is_gem(bp))
- gem_init_rx_ring(queue);
+ if (macb_is_gem(bp->caps))
+ gem_init_rx_ring(ctx, q);
else
- macb_init_rx_ring(queue);
+ macb_init_rx_ring(ctx, q);
}
napi_enable(&queue->napi_rx);
diff --git a/drivers/net/ethernet/cadence/macb_ptp.c b/drivers/net/ethernet/cadence/macb_ptp.c
index e5195d7dac1d..2070508fd2e0 100644
--- a/drivers/net/ethernet/cadence/macb_ptp.c
+++ b/drivers/net/ethernet/cadence/macb_ptp.c
@@ -28,10 +28,10 @@
static struct macb_dma_desc_ptp *macb_ptp_desc(struct macb *bp,
struct macb_dma_desc *desc)
{
- if (!macb_dma_ptp(bp))
+ if (!macb_dma_ptp(bp->caps))
return NULL;
- if (macb_dma64(bp))
+ if (macb_dma64(bp->caps))
return (struct macb_dma_desc_ptp *)
((u8 *)desc + sizeof(struct macb_dma_desc)
+ sizeof(struct macb_dma_desc_64));
@@ -384,7 +384,7 @@ int gem_get_hwtst(struct net_device *netdev,
struct macb *bp = netdev_priv(netdev);
*tstamp_config = bp->tstamp_config;
- if (!macb_dma_ptp(bp))
+ if (!macb_dma_ptp(bp->caps))
return -EOPNOTSUPP;
return 0;
@@ -411,7 +411,7 @@ int gem_set_hwtst(struct net_device *netdev,
struct macb *bp = netdev_priv(netdev);
u32 regval;
- if (!macb_dma_ptp(bp))
+ if (!macb_dma_ptp(bp->caps))
return -EOPNOTSUPP;
switch (tstamp_config->tx_type) {
--
2.53.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH net-next 09/11] net: macb: introduce macb_context_alloc() helper
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (7 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 08/11] net: macb: make `struct macb` subset reachable from macb_context struct Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam() Théo Lebrun
` (2 subsequent siblings)
11 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
Move the context allocation sequence out of macb_open() and into its own
helper function, macb_context_alloc(). All ops doing context
swapping (set_ringparam, change_mtu, etc) will use this helper.
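In userspace terms, the all-or-nothing allocation pattern behind this helper can be sketched as follows; all demo_* names are hypothetical stand-ins for the sketch, not the driver's real types:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in; the real macb_context holds DMA descriptor
 * rings and per-queue state. */
struct demo_context {
	size_t rx_ring_size;
	size_t tx_ring_size;
	unsigned char *rx_buffers;
	unsigned char *tx_buffers;
};

/* Allocate everything up front; on any failure, undo and return NULL
 * so the caller can bail out without touching the running interface. */
struct demo_context *demo_context_alloc(size_t rx_size, size_t tx_size)
{
	struct demo_context *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return NULL;

	ctx->rx_ring_size = rx_size;
	ctx->tx_ring_size = tx_size;
	ctx->rx_buffers = calloc(rx_size, 2048);
	ctx->tx_buffers = calloc(tx_size, 2048);
	if (!ctx->rx_buffers || !ctx->tx_buffers) {
		free(ctx->rx_buffers);
		free(ctx->tx_buffers);
		free(ctx);
		return NULL;
	}
	return ctx;
}

void demo_context_free(struct demo_context *ctx)
{
	if (!ctx)
		return;
	free(ctx->rx_buffers);
	free(ctx->tx_buffers);
	free(ctx);
}
```

On a partial failure everything already allocated is released, so callers never see a half-built context.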
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 55 +++++++++++++++++++++-----------
1 file changed, 36 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 47f0d27cd979..42b19b969f3e 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2875,6 +2875,36 @@ static int macb_alloc_consistent(struct macb_context *ctx)
return -ENOMEM;
}
+static struct macb_context *macb_context_alloc(struct macb *bp,
+ unsigned int mtu,
+ unsigned int rx_ring_size,
+ unsigned int tx_ring_size)
+{
+ struct macb_context *ctx;
+ int err;
+
+ ctx = kzalloc_obj(*ctx);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
+
+ ctx->info = &bp->info;
+ ctx->rx_buffer_size = macb_rx_buffer_size(bp, mtu);
+ ctx->rx_ring_size = rx_ring_size;
+ ctx->tx_ring_size = tx_ring_size;
+
+ err = macb_alloc_consistent(ctx);
+ if (err) {
+ netdev_err(bp->netdev,
+ "Unable to allocate DMA memory (error %d)\n", err);
+ kfree(ctx);
+ return ERR_PTR(err);
+ }
+
+ bp->macbgem_ops.mog_init_rings(ctx);
+
+ return ctx;
+}
+
static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
{
struct macb_rxq *rxq = &ctx->rxq[q];
@@ -3243,27 +3273,15 @@ static int macb_open(struct net_device *netdev)
if (err < 0)
return err;
- bp->ctx = kzalloc_obj(*bp->ctx);
- if (!bp->ctx) {
- err = -ENOMEM;
+ bp->ctx = macb_context_alloc(bp, netdev->mtu,
+ bp->configured_rx_ring_size,
+ bp->configured_tx_ring_size);
+ if (IS_ERR(bp->ctx)) {
+ err = PTR_ERR(bp->ctx);
+ bp->ctx = NULL;
goto pm_exit;
}
- bp->ctx->info = &bp->info;
-
- /* RX buffers initialization */
- bp->ctx->rx_buffer_size = macb_rx_buffer_size(bp, netdev->mtu);
- bp->ctx->rx_ring_size = bp->configured_rx_ring_size;
- bp->ctx->tx_ring_size = bp->configured_tx_ring_size;
-
- err = macb_alloc_consistent(bp->ctx);
- if (err) {
- netdev_err(netdev, "Unable to allocate DMA memory (error %d)\n",
- err);
- goto free_ctx;
- }
-
- bp->macbgem_ops.mog_init_rings(bp->ctx);
macb_init_buffers(bp);
for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
@@ -3302,7 +3320,6 @@ static int macb_open(struct net_device *netdev)
napi_disable(&queue->napi_tx);
}
macb_free_consistent(bp->ctx);
-free_ctx:
kfree(bp->ctx);
bp->ctx = NULL;
pm_exit:
--
2.53.0
* [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (8 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 09/11] net: macb: introduce macb_context_alloc() helper Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-01 20:17 ` Maxime Chevallier
2026-04-02 11:29 ` Nicolai Buchwitz
2026-04-01 16:39 ` [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu() Théo Lebrun
2026-04-02 11:35 ` [PATCH net-next 00/11] net: macb: implement context swapping Nicolai Buchwitz
11 siblings, 2 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
ethtool_ops.set_ringparam() is implemented using the primitive close /
update ring size / reopen sequence. Under memory pressure this does not
fly: we free our buffers at close and cannot reallocate new ones at
open. Also, it triggers a slow PHY reinit.
Instead, exploit the new context mechanism and improve our sequence to:
- allocate a new context (including buffers) first
- if it fails, return early without any impact on the interface
- stop interface
- update global state (bp, netdev, etc)
- pass buffer pointers to the hardware
- start interface
- free old context.
The HW disable sequence is inspired by macb_reset_hw() but avoids
(1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
over register writes which would be redundant (because values have not
changed).
The generic context swapping parts are isolated into helper functions
macb_context_swap_start|end(), reusable by other operations (change_mtu,
set_channels, etc).
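A minimal userspace model of that allocate-stop-swap-start-free ordering might look like this; the swap_* names are invented for the sketch, and the real start/end helpers also quiesce NAPI and program the hardware:

```c
#include <assert.h>
#include <stdlib.h>

struct swap_ctx {
	unsigned char *buffers;
	size_t ring_size;
};

struct swap_dev {
	struct swap_ctx *ctx;	/* currently active context */
	int running;		/* models netif_tx_disable()/start */
};

struct swap_ctx *swap_ctx_alloc(size_t ring_size)
{
	struct swap_ctx *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return NULL;
	ctx->buffers = calloc(ring_size, 2048);
	if (!ctx->buffers) {
		free(ctx);
		return NULL;
	}
	ctx->ring_size = ring_size;
	return ctx;
}

void swap_ctx_free(struct swap_ctx *ctx)
{
	if (!ctx)
		return;
	free(ctx->buffers);
	free(ctx);
}

/* Mirrors the ordering of the series: allocate first, only then stop
 * the datapath, publish the new context, restart, free the old one. */
int swap_dev_resize(struct swap_dev *dev, size_t new_ring_size)
{
	struct swap_ctx *new_ctx, *old_ctx;

	new_ctx = swap_ctx_alloc(new_ring_size);
	if (!new_ctx)
		return -1;	/* interface left untouched on failure */

	dev->running = 0;	/* macb_context_swap_start() */
	old_ctx = dev->ctx;
	dev->ctx = new_ctx;
	dev->running = 1;	/* macb_context_swap_end() */
	swap_ctx_free(old_ctx);
	return 0;
}
```

The key property is that a failing swap_ctx_alloc() leaves both dev->ctx and the running state untouched.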
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
1 file changed, 82 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 42b19b969f3e..543356554c11 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
return ctx;
}
+static void macb_context_swap_start(struct macb *bp)
+{
+ struct macb_queue *queue;
+ unsigned int q;
+ u32 ctrl;
+
+ /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
+
+ netif_tx_disable(bp->netdev);
+
+ ctrl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
+
+ macb_writel(bp, TSR, -1);
+ macb_writel(bp, RSR, -1);
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+ queue_writel(queue, IDR, -1);
+ queue_readl(queue, ISR);
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, -1);
+ }
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+ napi_disable(&queue->napi_rx);
+ napi_disable(&queue->napi_tx);
+ }
+}
+
+static void macb_context_swap_end(struct macb *bp,
+ struct macb_context *new_ctx)
+{
+ struct macb_context *old_ctx;
+ struct macb_queue *queue;
+ unsigned int q;
+ u32 ctrl;
+
+ /* Swap contexts & give buffer pointers to HW. */
+
+ old_ctx = bp->ctx;
+ bp->ctx = new_ctx;
+ macb_init_buffers(bp);
+
+ /* Start NAPI, HW Tx/Rx and software Tx. */
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+ napi_enable(&queue->napi_rx);
+ napi_enable(&queue->napi_tx);
+ }
+
+ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+ for (q = 0, queue = bp->queues; q < bp->num_queues;
+ ++q, ++queue) {
+ queue_writel(queue, IER,
+ bp->rx_intr_mask |
+ MACB_TX_INT_FLAGS |
+ MACB_BIT(HRESP));
+ }
+ }
+
+ ctrl = macb_readl(bp, NCR);
+ macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
+
+ netif_tx_start_all_queues(bp->netdev);
+
+ /* Free old context. */
+
+ macb_free_consistent(old_ctx);
+}
+
static void gem_init_rx_ring(struct macb_context *ctx, unsigned int q)
{
struct macb_rxq *rxq = &ctx->rxq[q];
@@ -3819,9 +3889,10 @@ static int macb_set_ringparam(struct net_device *netdev,
struct kernel_ethtool_ringparam *kernel_ring,
struct netlink_ext_ack *extack)
{
+ unsigned int new_rx_size, new_tx_size;
struct macb *bp = netdev_priv(netdev);
- u32 new_rx_size, new_tx_size;
- unsigned int reset = 0;
+ bool running = netif_running(netdev);
+ struct macb_context *new_ctx;
if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
return -EINVAL;
@@ -3840,16 +3911,20 @@ static int macb_set_ringparam(struct net_device *netdev,
return 0;
}
- if (netif_running(bp->netdev)) {
- reset = 1;
- macb_close(bp->netdev);
+ if (running) {
+ new_ctx = macb_context_alloc(bp, netdev->mtu,
+ new_rx_size, new_tx_size);
+ if (IS_ERR(new_ctx))
+ return PTR_ERR(new_ctx);
+
+ macb_context_swap_start(bp);
}
bp->configured_rx_ring_size = new_rx_size;
bp->configured_tx_ring_size = new_tx_size;
- if (reset)
- macb_open(bp->netdev);
+ if (running)
+ macb_context_swap_end(bp, new_ctx);
return 0;
}
--
2.53.0
* [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu()
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (9 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam() Théo Lebrun
@ 2026-04-01 16:39 ` Théo Lebrun
2026-04-02 11:30 ` Nicolai Buchwitz
2026-04-02 11:35 ` [PATCH net-next 00/11] net: macb: implement context swapping Nicolai Buchwitz
11 siblings, 1 reply; 24+ messages in thread
From: Théo Lebrun @ 2026-04-01 16:39 UTC (permalink / raw)
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier, netdev,
linux-kernel, Théo Lebrun
Use newly introduced context buffer management to implement
.ndo_change_mtu() as a context swap: allocate new context ->
reconfigure HW -> free old context.
This resists memory pressure well: an allocation failure is reported
without closing the interface. It is also much faster, as no PHY reinit
is needed.
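A simplified sketch of the resulting .ndo_change_mtu() flow, with hypothetical mtu_* names (the force_alloc_fail knob exists only for the demo):

```c
#include <assert.h>
#include <stdlib.h>

struct mtu_dev {
	int mtu;
	int running;
	char *ctx;		/* stands in for struct macb_context */
	int force_alloc_fail;	/* demo-only failure injection */
};

/* Allocation stand-in; the driver calls macb_context_alloc() here. */
char *mtu_ctx_alloc(struct mtu_dev *dev, int new_mtu)
{
	if (dev->force_alloc_fail)
		return NULL;
	return calloc(1, (size_t)new_mtu);
}

/* Model of the reworked macb_change_mtu(): on a running interface, an
 * allocation failure returns before the MTU or active context is
 * touched; on a stopped interface, only the stored MTU changes. */
int mtu_dev_change_mtu(struct mtu_dev *dev, int new_mtu)
{
	char *new_ctx = NULL;

	if (dev->running) {
		new_ctx = mtu_ctx_alloc(dev, new_mtu);
		if (!new_ctx)
			return -1;	/* old MTU and context intact */
	}

	dev->mtu = new_mtu;

	if (dev->running) {
		free(dev->ctx);
		dev->ctx = new_ctx;
	}
	return 0;
}
```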
Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
---
drivers/net/ethernet/cadence/macb_main.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 543356554c11..e10791bf1f4d 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -3438,11 +3438,25 @@ static int macb_close(struct net_device *netdev)
static int macb_change_mtu(struct net_device *netdev, int new_mtu)
{
- if (netif_running(netdev))
- return -EBUSY;
+ struct macb *bp = netdev_priv(netdev);
+ bool running = netif_running(netdev);
+ struct macb_context *new_ctx;
+
+ if (running) {
+ new_ctx = macb_context_alloc(bp, new_mtu,
+ bp->configured_rx_ring_size,
+ bp->configured_tx_ring_size);
+ if (IS_ERR(new_ctx))
+ return PTR_ERR(new_ctx);
+
+ macb_context_swap_start(bp);
+ }
WRITE_ONCE(netdev->mtu, new_mtu);
+ if (running)
+ macb_context_swap_end(bp, new_ctx);
+
return 0;
}
--
2.53.0
* Re: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-01 16:39 ` [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam() Théo Lebrun
@ 2026-04-01 20:17 ` Maxime Chevallier
2026-04-02 16:34 ` Théo Lebrun
2026-04-02 11:29 ` Nicolai Buchwitz
1 sibling, 1 reply; 24+ messages in thread
From: Maxime Chevallier @ 2026-04-01 20:17 UTC (permalink / raw)
To: Théo Lebrun, Nicolas Ferre, Claudiu Beznea, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Richard Cochran, Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, netdev, linux-kernel
Hi Théo,
this is nice work!
On 01/04/2026 18:39, Théo Lebrun wrote:
> ethtool_ops.set_ringparam() is implemented using the primitive close /
> update ring size / reopen sequence. Under memory pressure this does not
> fly: we free our buffers at close and cannot reallocate new ones at
> open. Also, it triggers a slow PHY reinit.
>
> Instead, exploit the new context mechanism and improve our sequence to:
> - allocate a new context (including buffers) first
> - if it fails, return early without any impact on the interface
> - stop interface
> - update global state (bp, netdev, etc)
> - pass buffer pointers to the hardware
> - start interface
> - free old context.
>
> The HW disable sequence is inspired by macb_reset_hw() but avoids
> (1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
>
> The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
> over register writes which would be redundant (because values have not
> changed).
>
> The generic context swapping parts are isolated into helper functions
> macb_context_swap_start|end(), reusable by other operations (change_mtu,
> set_channels, etc).
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
> 1 file changed, 82 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index 42b19b969f3e..543356554c11 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> @@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
> return ctx;
> }
>
> +static void macb_context_swap_start(struct macb *bp)
> +{
> + struct macb_queue *queue;
> + unsigned int q;
> + u32 ctrl;
> +
> + /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
> +
> + netif_tx_disable(bp->netdev);
> +
> + ctrl = macb_readl(bp, NCR);
> + macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
> +
> + macb_writel(bp, TSR, -1);
> + macb_writel(bp, RSR, -1);
> +
> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
> + queue_writel(queue, IDR, -1);
> + queue_readl(queue, ISR);
> + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
> + queue_writel(queue, ISR, -1);
> + }
These registers appear to be protected by bp->lock; any chance that this
may race with an interrupt in the middle of them being configured here?
Maxime
* Re: [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime
2026-04-01 16:39 ` [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime Théo Lebrun
@ 2026-04-02 11:14 ` Nicolai Buchwitz
2026-04-02 13:57 ` Théo Lebrun
0 siblings, 1 reply; 24+ messages in thread
From: Nicolai Buchwitz @ 2026-04-02 11:14 UTC (permalink / raw)
To: Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On 1.4.2026 18:39, Théo Lebrun wrote:
> The tieoff descriptor is a RX DMA descriptor ring of size one. It gets
> configured onto queues for Wake-on-LAN during system-wide suspend when
> hardware does not support disabling individual queues
> (MACB_CAPS_QUEUE_DISABLE).
>
> MACB/GEM driver allocates it alongside the main RX ring
> inside macb_alloc_consistent() at open. Free is done by
> macb_free_consistent() at close.
>
> Change to allocate once at probe and free on probe failure or device
> removal. This makes the tieoff descriptor lifetime much longer,
> avoiding repeating coherent buffer allocation on each open/close cycle.
>
> Main benefit: we dissociate its lifetime from the main ring's lifetime.
> That way there is less work to be doing on resources (re)alloc. This
> currently happens on close/open, but will soon also happen on context
> swap operations (set_ringparam, change_mtu, set_channels, etc).
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb_main.c | 70
> ++++++++++++++++----------------
> [...]
>
> +static int macb_alloc_tieoff(struct macb *bp)
> +{
> + /* Tieoff is a workaround in case HW cannot disable queues, for PM.
> */
> + if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
> + return 0;
> +
> + bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
> + macb_dma_desc_get_size(bp),
> + &bp->rx_ring_tieoff_dma,
> + GFP_KERNEL);
> + if (!bp->rx_ring_tieoff)
> + return -ENOMEM;
> +
> + return 0;
> +}
The old macb_init_tieoff() that wrote WRAP+USED into the
descriptor is deleted but its work is not replicated here.
dma_alloc_coherent zeroes the memory, so RX_USED=0 and the
hardware will treat it as a valid receive buffer pointing to
DMA address 0 during suspend.
Shouldn't this have a macb_set_addr() + ctrl=0 after the
allocation?
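For illustration, the initialization being asked about could be sketched like this; the DEMO_* bit names are made up for the example, where the driver would use its MACB_BIT() macros and macb_set_addr():

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit layout for the demo; in the RX descriptor the low
 * bits of the address word carry the used and wrap flags. */
#define DEMO_RX_USED	(1u << 0)
#define DEMO_RX_WRAP	(1u << 1)

struct demo_desc {
	uint32_t addr;
	uint32_t ctrl;
};

/* What the deleted macb_init_tieoff() did, in spirit: mark the lone
 * descriptor as used (so HW never writes into it) and as the ring
 * wrap point. Freshly zeroed coherent memory has neither bit set. */
void demo_init_tieoff(struct demo_desc *desc)
{
	desc->addr = DEMO_RX_USED | DEMO_RX_WRAP;
	desc->ctrl = 0;
}
```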
> [...]
Thanks
Nicolai
* Re: [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management
2026-04-01 16:39 ` [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management Théo Lebrun
@ 2026-04-02 11:22 ` Nicolai Buchwitz
2026-04-02 14:11 ` Théo Lebrun
0 siblings, 1 reply; 24+ messages in thread
From: Nicolai Buchwitz @ 2026-04-02 11:22 UTC (permalink / raw)
To: Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On 1.4.2026 18:39, Théo Lebrun wrote:
> Whenever an operation requires buffer realloc, we close the interface,
> update parameters and reopen. To improve reliability under memory
> pressure, we should rather alloc new buffers, reconfigure HW and free
> old buffers. This requires MACB to support having multiple "contexts"
> in parallel.
>
> Introduce this concept by adding the macb_context struct, which owns
> all
> queue buffers and the parameters associated. We do not yet support
> multiple contexts in parallel, because all functions access bp->ctx
> (the currently active context) directly.
>
> Steps:
>
> - Introduce `struct macb_context` and its children `struct macb_rxq`
> and `struct macb_txq`. Context fields are stolen from `struct macb`
> and rxq/txq fields are from `struct macb_queue`.
>
> Making it two separate structs per queue simplifies accesses: we
> grab
> a txq/rxq local variable and access fields like txq->head instead of
> queue->tx_head. It also anecdotally improves data locality.
>
> - macb_init_dflt() does not set bp->ctx->{rx,tx}_ring_size to default
> values as ctx is not allocated yet. Instead, introduce
> bp->configured_{rx,tx}_ring_size which get updated on user requests.
>
> - macb_open() starts by allocating bp->ctx. It gets freed in the
> open error codepath or by macb_close().
>
> - Guided by compile errors, update all codepaths. Most of the diff is
> changing
> `queue->tx_*` to `txq->*` and `queue->rx_*` to `rxq->*`, with a new
> local variable. Also rx_buffer_size / rx_ring_size / tx_ring_size
> move from bp to bp->ctx.
>
> Introduce two helper functions, macb_tx|rx(), to convert macb_queue
> pointers.
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb.h | 49 ++--
> drivers/net/ethernet/cadence/macb_main.c | 442
> ++++++++++++++++++-------------
> 2 files changed, 296 insertions(+), 195 deletions(-)
>
> [...]
> diff --git a/drivers/net/ethernet/cadence/macb_main.c
> b/drivers/net/ethernet/cadence/macb_main.c
> index d5023fdc0756..0f63d9b89c11 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> [...]
> @@ -3596,14 +3677,15 @@ static void macb_get_regs(struct net_device
> *netdev, struct ethtool_regs *regs,
> void *p)
> {
> struct macb *bp = netdev_priv(netdev);
> + struct macb_txq *txq = &bp->ctx->txq[0];
bp->ctx is NULL when the interface is down. This will crash if
ethtool -d is called while the interface is not running. Same
issue below in macb_get_ringparam().
> unsigned int tail, head;
> u32 *regs_buff = p;
>
> regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
> | MACB_GREGS_VERSION;
>
> - tail = macb_tx_ring_wrap(bp, bp->queues[0].tx_tail);
> - head = macb_tx_ring_wrap(bp, bp->queues[0].tx_head);
> + tail = macb_tx_ring_wrap(bp, txq->tail);
> + head = macb_tx_ring_wrap(bp, txq->head);
>
> regs_buff[0] = macb_readl(bp, NCR);
> regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
> @@ -3682,8 +3764,8 @@ static void macb_get_ringparam(struct net_device
> *netdev,
> ring->rx_max_pending = MAX_RX_RING_SIZE;
> ring->tx_max_pending = MAX_TX_RING_SIZE;
>
> - ring->rx_pending = bp->rx_ring_size;
> - ring->tx_pending = bp->tx_ring_size;
> + ring->rx_pending = bp->ctx->rx_ring_size;
> + ring->tx_pending = bp->ctx->tx_ring_size;
Same NULL ctx issue as above. This one could just read from
bp->configured_{rx,tx}_ring_size instead.
> [...]
Thanks
Nicolai
* Re: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-01 16:39 ` [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam() Théo Lebrun
2026-04-01 20:17 ` Maxime Chevallier
@ 2026-04-02 11:29 ` Nicolai Buchwitz
2026-04-02 16:31 ` Théo Lebrun
1 sibling, 1 reply; 24+ messages in thread
From: Nicolai Buchwitz @ 2026-04-02 11:29 UTC (permalink / raw)
To: Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On 1.4.2026 18:39, Théo Lebrun wrote:
> ethtool_ops.set_ringparam() is implemented using the primitive close /
> update ring size / reopen sequence. Under memory pressure this does not
> fly: we free our buffers at close and cannot reallocate new ones at
> open. Also, it triggers a slow PHY reinit.
>
> Instead, exploit the new context mechanism and improve our sequence to:
> - allocate a new context (including buffers) first
> - if it fails, early return without any impact to the interface
> - stop interface
> - update global state (bp, netdev, etc)
> - pass buffer pointers to the hardware
> - start interface
> - free old context.
>
> The HW disable sequence is inspired by macb_reset_hw() but avoids
> (1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
>
> The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
> over register writes which would be redundant (because values have not
> changed).
>
> The generic context swapping parts are isolated into helper functions
> macb_context_swap_start|end(), reusable by other operations (change_mtu,
> set_channels, etc).
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
> 1 file changed, 82 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index 42b19b969f3e..543356554c11 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> @@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
> return ctx;
> }
>
> +static void macb_context_swap_start(struct macb *bp)
> +{
> + struct macb_queue *queue;
> + unsigned int q;
> + u32 ctrl;
> +
> + /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
> +
> + netif_tx_disable(bp->netdev);
> +
> + ctrl = macb_readl(bp, NCR);
> + macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
> +
> + macb_writel(bp, TSR, -1);
> + macb_writel(bp, RSR, -1);
> +
> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
> + queue_writel(queue, IDR, -1);
> + queue_readl(queue, ISR);
> + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
> + queue_writel(queue, ISR, -1);
> + }
> +
> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
> + napi_disable(&queue->napi_rx);
> + napi_disable(&queue->napi_tx);
> + }
tx_error_task, hresp_err_bh_work, and tx_lpi_work all dereference
bp->ctx and could race with the pointer swap in swap_end.
macb_close() cancels at least tx_lpi_work here. Should these be
flushed too?
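If so, a rough sketch for the end of macb_context_swap_start() could
look like this (untested; work item names taken from this series, and
the exact cancel variants would need checking, e.g. whether
tx_lpi_work is a plain or delayed work):

	/* Make sure no deferred work can still dereference the old
	 * bp->ctx once the pointer is swapped in swap_end().
	 */
	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
		cancel_work_sync(&queue->tx_error_task);
	cancel_work_sync(&bp->hresp_err_bh_work);
	cancel_work_sync(&bp->tx_lpi_work);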
> +}
> +
> +static void macb_context_swap_end(struct macb *bp,
> + struct macb_context *new_ctx)
> +{
> + struct macb_context *old_ctx;
> + struct macb_queue *queue;
> + unsigned int q;
> + u32 ctrl;
> +
> + /* Swap contexts & give buffer pointers to HW. */
> +
> + old_ctx = bp->ctx;
> + bp->ctx = new_ctx;
> + macb_init_buffers(bp);
> +
> + /* Start NAPI, HW Tx/Rx and software Tx. */
> +
> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
> + napi_enable(&queue->napi_rx);
> + napi_enable(&queue->napi_tx);
> + }
> +
> + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
> + for (q = 0, queue = bp->queues; q < bp->num_queues;
> + ++q, ++queue) {
> + queue_writel(queue, IER,
> + bp->rx_intr_mask |
> + MACB_TX_INT_FLAGS |
> + MACB_BIT(HRESP));
> + }
> + }
> +
> + ctrl = macb_readl(bp, NCR);
> + macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
> +
> + netif_tx_start_all_queues(bp->netdev);
> +
> + /* Free old context. */
> +
> + macb_free_consistent(old_ctx);
1. kfree(old_ctx) is missing. The context struct itself leaks on
every swap.
2. macb_close() calls netdev_tx_reset_queue() for each queue.
Shouldn't the swap do the same? BQL accounting will be stale
after switching to a fresh context.
3. macb_configure_dma() is not called after the swap. For
set_ringparam this is probably fine since rx_buffer_size
does not change, but this becomes a problem in patch 11.
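Roughly, the tail of macb_context_swap_end() could then become
something like this (untested sketch; where macb_configure_dma() and
the BQL reset belong relative to re-enabling RE/TE would need care):

	/* Reset BQL accounting for the fresh set of buffers (2). */
	for (q = 0; q < bp->num_queues; ++q)
		netdev_tx_reset_queue(netdev_get_tx_queue(bp->netdev, q));

	/* Pick up the new context's parameters, e.g. rx_buffer_size
	 * for the DMACFG RXBS field (3).
	 */
	macb_configure_dma(bp);

	/* Free old context, including the struct itself (1). */
	macb_free_consistent(old_ctx);
	kfree(old_ctx);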
> [...]
* Re: [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu()
2026-04-01 16:39 ` [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu() Théo Lebrun
@ 2026-04-02 11:30 ` Nicolai Buchwitz
0 siblings, 0 replies; 24+ messages in thread
From: Nicolai Buchwitz @ 2026-04-02 11:30 UTC (permalink / raw)
To: Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On 1.4.2026 18:39, Théo Lebrun wrote:
> Use newly introduced context buffer management to implement
> .ndo_change_mtu() as a context swap: allocate new context ->
> reconfigure HW -> free old context.
>
> This resists memory pressure well by failing without closing the
> interface and it is much faster by avoiding PHY reinit.
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> drivers/net/ethernet/cadence/macb_main.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index 543356554c11..e10791bf1f4d 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
> @@ -3438,11 +3438,25 @@ static int macb_close(struct net_device *netdev)
>
> static int macb_change_mtu(struct net_device *netdev, int new_mtu)
> {
> - if (netif_running(netdev))
> - return -EBUSY;
> + struct macb *bp = netdev_priv(netdev);
> + bool running = netif_running(netdev);
> + struct macb_context *new_ctx;
> +
> + if (running) {
> + new_ctx = macb_context_alloc(bp, new_mtu,
> + bp->configured_rx_ring_size,
> + bp->configured_tx_ring_size);
> + if (IS_ERR(new_ctx))
> + return PTR_ERR(new_ctx);
> +
> + macb_context_swap_start(bp);
> + }
>
> WRITE_ONCE(netdev->mtu, new_mtu);
>
> + if (running)
> + macb_context_swap_end(bp, new_ctx);
Same issues from patch 10 apply here, plus:
macb_context_swap_end() never calls macb_configure_dma(). The
new context has a different rx_buffer_size (derived from new_mtu),
but the DMACFG register's RXBS field still reflects the old
value. If new MTU > old MTU, the hardware will DMA past the end
of the new buffers using the old (smaller) RXBS. Shouldn't
macb_configure_dma() be called after the swap?
> +
> return 0;
> }
* Re: [PATCH net-next 00/11] net: macb: implement context swapping
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
` (10 preceding siblings ...)
2026-04-01 16:39 ` [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu() Théo Lebrun
@ 2026-04-02 11:35 ` Nicolai Buchwitz
2026-04-02 13:46 ` Théo Lebrun
11 siblings, 1 reply; 24+ messages in thread
From: Nicolai Buchwitz @ 2026-04-02 11:35 UTC (permalink / raw)
To: Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On 1.4.2026 18:39, Théo Lebrun wrote:
> MACB has a pretty primitive approach to buffer management. They are all
> stored in `struct macb *bp`. On operations that require buffer realloc
> (set_ringparam & change_mtu ATM), the only option is to close the
> interface, change our global state and re-open the interface.
>
> Two issues:
> - It doesn't fly on memory pressured systems; we free our precious
> buffers and don't manage to reallocate fully, meaning our machine
> just lost its network access.
> - Anecdotally, it is pretty slow because it implies a full PHY reinit.
>
> Instead, we shall:
> - allocate a new context (including buffers) first
> - if it fails, early return without any impact to the interface
> - stop interface
> - update global state (bp, netdev, etc)
> - pass newly allocated buffer pointers to the hardware
> - start interface
> - free old context
>
> This is what we implement here. Both .set_ringparam() and
> .ndo_change_mtu() are covered by this series. In the future,
> at least .set_channels() [0], XDP [1] and XSK [2] would benefit.
Thanks for your work. The context swapping approach makes a lot of
sense and will finally bring the proper MTU change support that I
tried to patch earlier.
>
> The change is super intrusive so conflicts will be major. Sorry!
>
> Thanks,
> Have a nice day,
> Théo
>
> [0]:
> https://lore.kernel.org/netdev/20260317-macb-set-channels-v4-0-1bd4f4ffcfca@bootlin.com/
> [1]:
> https://lore.kernel.org/netdev/20260323221047.2749577-1-pvalerio@redhat.com/
> [2]:
> https://lore.kernel.org/netdev/20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com/
>
> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
> ---
> Théo Lebrun (11):
> net: macb: unify device pointer naming convention
> net: macb: unify `struct macb *` naming convention
> net: macb: unify queue index variable naming convention and types
> net: macb: enforce reverse christmas tree (RCT) convention
> net: macb: allocate tieoff descriptor once across device lifetime
> net: macb: introduce macb_context struct for buffer management
> net: macb: avoid macb_init_rx_buffer_size() modifying state
> net: macb: make `struct macb` subset reachable from macb_context
> struct
> net: macb: introduce macb_context_alloc() helper
> net: macb: use context swapping in .set_ringparam()
> net: macb: use context swapping in .ndo_change_mtu()
>
> drivers/net/ethernet/cadence/macb.h | 119 +-
> drivers/net/ethernet/cadence/macb_main.c | 1731 +++++++++++++++++-------------
> drivers/net/ethernet/cadence/macb_pci.c | 46 +-
> drivers/net/ethernet/cadence/macb_ptp.c | 26 +-
> 4 files changed, 1090 insertions(+), 832 deletions(-)
> ---
> base-commit: 321d1ee521de1362c22adadbc0ce066050a17783
The series didn't apply cleanly on current net-next. The
base commit 321d1ee521de doesn't seem to be upstream yet, is
this based on your set_channels v4 series?
> change-id: 20260401-macb-context-bd0caf20414d
>
> Best regards,
> --
> Théo Lebrun <theo.lebrun@bootlin.com>
Thanks
Nicolai
* Re: [PATCH net-next 00/11] net: macb: implement context swapping
2026-04-02 11:35 ` [PATCH net-next 00/11] net: macb: implement context swapping Nicolai Buchwitz
@ 2026-04-02 13:46 ` Théo Lebrun
0 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-02 13:46 UTC (permalink / raw)
To: Nicolai Buchwitz, Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
Hello Nicolai,
On Thu Apr 2, 2026 at 1:35 PM CEST, Nicolai Buchwitz wrote:
> On 1.4.2026 18:39, Théo Lebrun wrote:
>> MACB has a pretty primitive approach to buffer management. They are all
>> stored in `struct macb *bp`. On operations that require buffer realloc
>> (set_ringparam & change_mtu ATM), the only option is to close the
>> interface, change our global state and re-open the interface.
>>
>> Two issues:
>> - It doesn't fly on memory pressured systems; we free our precious
>> buffers and don't manage to reallocate fully, meaning our machine
>> just lost its network access.
>> - Anecdotally, it is pretty slow because it implies a full PHY reinit.
>>
>> Instead, we shall:
>> - allocate a new context (including buffers) first
>> - if it fails, early return without any impact to the interface
>> - stop interface
>> - update global state (bp, netdev, etc)
>> - pass newly allocated buffer pointers to the hardware
>> - start interface
>> - free old context
>>
>> This is what we implement here. Both .set_ringparam() and
>> .ndo_change_mtu() are covered by this series. In the future,
>> at least .set_channels() [0], XDP [1] and XSK [2] would benefit.
>
> Thanks for your work, the context swapping approach probably
> makes a lot of sense and will finally bring proper MTU change
> support that I tried to patch earlier.
Thanks for the review!
>> The change is super intrusive so conflicts will be major. Sorry!
>>
>> Thanks,
>> Have a nice day,
>> Théo
>>
>> [0]:
>> https://lore.kernel.org/netdev/20260317-macb-set-channels-v4-0-1bd4f4ffcfca@bootlin.com/
>> [1]:
>> https://lore.kernel.org/netdev/20260323221047.2749577-1-pvalerio@redhat.com/
>> [2]:
>> https://lore.kernel.org/netdev/20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com/
>>
>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>> ---
>> Théo Lebrun (11):
>> net: macb: unify device pointer naming convention
>> net: macb: unify `struct macb *` naming convention
>> net: macb: unify queue index variable naming convention and types
>> net: macb: enforce reverse christmas tree (RCT) convention
>> net: macb: allocate tieoff descriptor once across device lifetime
>> net: macb: introduce macb_context struct for buffer management
>> net: macb: avoid macb_init_rx_buffer_size() modifying state
>> net: macb: make `struct macb` subset reachable from macb_context
>> struct
>> net: macb: introduce macb_context_alloc() helper
>> net: macb: use context swapping in .set_ringparam()
>> net: macb: use context swapping in .ndo_change_mtu()
>>
>> drivers/net/ethernet/cadence/macb.h | 119 +-
>> drivers/net/ethernet/cadence/macb_main.c | 1731 +++++++++++++++++-------------
>> drivers/net/ethernet/cadence/macb_pci.c | 46 +-
>> drivers/net/ethernet/cadence/macb_ptp.c | 26 +-
>> 4 files changed, 1090 insertions(+), 832 deletions(-)
>> ---
>> base-commit: 321d1ee521de1362c22adadbc0ce066050a17783
>
> The series didn't apply cleanly on current net-next. The
> base commit 321d1ee521de doesn't seem to be upstream yet, is
> this based on your set_channels v4 series?
Surprising. I fetched net-next/main yesterday morning. My branch is:
- net-next/main @ f1359c240191
- my three series needed for working networking on MACB [0][1][2]
- some dev defconfigs
- finally the series sent upstream (b4 cover letter then series)
I confirm it applies on f1359c240191 but not on today's
net-next/main @ 269389ba5398.
I should experiment with b4's series dependency management. IIUC it
would have exposed, through public metadata, that my parent commit was
f1359c240191, even if that isn't strictly true locally.
We happen to add macb_{alloc,free}_tieoff() just above
at91_default_usrio which got edited by Conor in cee10a01e286 ("net:
macb: fix use of at91_default_usrio without CONFIG_OF"). Will fix in V2.
Thanks,
[0]: https://lore.kernel.org/all/20260225-macb-phy-v7-0-665bd8619d51@bootlin.com/
[1]: https://lore.kernel.org/all/20260225-macb-phy-v7-0-d3c9842ec931@bootlin.com/
[2]: https://lore.kernel.org/linux-phy/20260309-macb-phy-v9-0-5afd87d9db43@bootlin.com/
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime
2026-04-02 11:14 ` Nicolai Buchwitz
@ 2026-04-02 13:57 ` Théo Lebrun
0 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-02 13:57 UTC (permalink / raw)
To: Nicolai Buchwitz, Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On Thu Apr 2, 2026 at 1:14 PM CEST, Nicolai Buchwitz wrote:
> On 1.4.2026 18:39, Théo Lebrun wrote:
>> The tieoff descriptor is a RX DMA descriptor ring of size one. It gets
>> configured onto queues for Wake-on-LAN during system-wide suspend when
>> hardware does not support disabling individual queues
>> (MACB_CAPS_QUEUE_DISABLE).
>>
>> MACB/GEM driver allocates it alongside the main RX ring
>> inside macb_alloc_consistent() at open. Free is done by
>> macb_free_consistent() at close.
>>
>> Change to allocate once at probe and free on probe failure or device
>> removal. This makes the tieoff descriptor lifetime much longer,
>> avoiding repeating coherent buffer allocation on each open/close cycle.
>>
>> Main benefit: we dissociate its lifetime from the main ring's lifetime.
>> That way there is less work to be doing on resources (re)alloc. This
>> currently happens on close/open, but will soon also happen on context
>> swap operations (set_ringparam, change_mtu, set_channels, etc).
>>
>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>> ---
>> drivers/net/ethernet/cadence/macb_main.c | 70 ++++++++++++++++++++++++++++++----------------
>
>> [...]
>
>>
>> +static int macb_alloc_tieoff(struct macb *bp)
>> +{
>> + /* Tieoff is a workaround in case HW cannot disable queues, for PM.
>> */
>> + if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
>> + return 0;
>> +
>> + bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
>> + macb_dma_desc_get_size(bp),
>> + &bp->rx_ring_tieoff_dma,
>> + GFP_KERNEL);
>> + if (!bp->rx_ring_tieoff)
>> + return -ENOMEM;
>> +
>> + return 0;
>> +}
>
> The old macb_init_tieoff() that wrote WRAP+USED into the
> descriptor is deleted but its work is not replicated here.
> dma_alloc_coherent zeroes the memory, so RX_USED=0 and the
> hardware will treat it as a valid receive buffer pointing to
> DMA address 0 during suspend.
>
> Shouldn't this have a macb_set_addr() + ctrl=0 after the
> allocation?
Clearly! This V1 uses the tieoff descriptor uninitialised. For V2, the
two instructions from the old macb_init_tieoff() have been appended to
macb_alloc_tieoff():
static int macb_alloc_tieoff(struct macb *bp)
{
	/* Tieoff is a workaround in case HW cannot disable queues, for PM. */
	if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
		return 0;

	bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
						macb_dma_desc_get_size(bp),
						&bp->rx_ring_tieoff_dma,
						GFP_KERNEL);
	if (!bp->rx_ring_tieoff)
		return -ENOMEM;

	macb_set_addr(bp, bp->rx_ring_tieoff,
		      MACB_BIT(RX_WRAP) | MACB_BIT(RX_USED));
	bp->rx_ring_tieoff->ctrl = 0;

	return 0;
}
Thanks,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management
2026-04-02 11:22 ` Nicolai Buchwitz
@ 2026-04-02 14:11 ` Théo Lebrun
0 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-02 14:11 UTC (permalink / raw)
To: Nicolai Buchwitz, Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On Thu Apr 2, 2026 at 1:22 PM CEST, Nicolai Buchwitz wrote:
> On 1.4.2026 18:39, Théo Lebrun wrote:
>> Whenever an operation requires buffer realloc, we close the interface,
>> update parameters and reopen. To improve reliability under memory
>> pressure, we should rather alloc new buffers, reconfigure HW and free
>> old buffers. This requires MACB to support having multiple "contexts"
>> in parallel.
>>
>> Introduce this concept by adding the macb_context struct, which owns
>> all
>> queue buffers and the parameters associated. We do not yet support
>> multiple contexts in parallel, because all functions access bp->ctx
>> (the currently active context) directly.
>>
>> Steps:
>>
>> - Introduce `struct macb_context` and its children `struct macb_rxq`
>> and `struct macb_txq`. Context fields are stolen from `struct macb`
>> and rxq/txq fields are from `struct macb_queue`.
>>
>> Making it two separate structs per queue simplifies accesses: we grab
>> a txq/rxq local variable and access fields like txq->head instead of
>> queue->tx_head. It also anecdotally improves data locality.
>>
>> - macb_init_dflt() does not set bp->ctx->{rx,tx}_ring_size to default
>> values as ctx is not allocated yet. Instead, introduce
>> bp->configured_{rx,tx}_ring_size which get updated on user requests.
>>
>> - macb_open() starts by allocating bp->ctx. It gets freed in the
>> open error codepath or by macb_close().
>>
>> - Guided by compile errors, update all codepaths. Most diff is changing
>> `queue->tx_*` to `txq->*` and `queue->rx_*` to `rxq->*`, with a new
>> local variable. Also rx_buffer_size / rx_ring_size / tx_ring_size
>> move from bp to bp->ctx.
>>
>> Introduce two helpers macb_tx|rx() functions to convert macb_queue
>> pointers.
>>
>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>> ---
>> drivers/net/ethernet/cadence/macb.h | 49 ++--
>> drivers/net/ethernet/cadence/macb_main.c | 442 ++++++++++++++++++++++++++++++-------------------
>> 2 files changed, 296 insertions(+), 195 deletions(-)
>>
>
>> [...]
>
>> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
>> index d5023fdc0756..0f63d9b89c11 100644
>> --- a/drivers/net/ethernet/cadence/macb_main.c
>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>
>> [...]
>
>> @@ -3596,14 +3677,15 @@ static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
>> void *p)
>> {
>> struct macb *bp = netdev_priv(netdev);
>> + struct macb_txq *txq = &bp->ctx->txq[0];
>
> bp->ctx is NULL when the interface is down. This will crash if
> ethtool -d is called while the interface is not running. Same
> issue below in macb_get_ringparam().
Agreed. Will check if context is alive and use default values otherwise.
Something like this for V2:
static void macb_get_regs(struct net_device *netdev, struct ethtool_regs *regs,
			  void *p)
{
	dma_addr_t tx_dma_tail = 0, tx_dma_head = 0;
	struct macb *bp = netdev_priv(netdev);
	unsigned int tail = 0, head = 0;
	struct macb_txq *txq;
	u32 *regs_buff = p;

	regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
			| MACB_GREGS_VERSION;

	if (bp->ctx) {
		txq = &bp->ctx->txq[0];
		tail = macb_tx_ring_wrap(bp->ctx, txq->tail);
		head = macb_tx_ring_wrap(bp->ctx, txq->head);
		tx_dma_tail = macb_tx_dma(&bp->queues[0], tail);
		tx_dma_head = macb_tx_dma(&bp->queues[0], head);
	}

	regs_buff[0] = macb_readl(bp, NCR);
	regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
	regs_buff[2] = macb_readl(bp, NSR);
	regs_buff[3] = macb_readl(bp, TSR);
	regs_buff[4] = macb_readl(bp, RBQP);
	regs_buff[5] = macb_readl(bp, TBQP);
	regs_buff[6] = macb_readl(bp, RSR);
	regs_buff[7] = macb_readl(bp, IMR);
	regs_buff[8] = tail;
	regs_buff[9] = head;
	regs_buff[10] = tx_dma_tail;
	regs_buff[11] = tx_dma_head;

	if (!(bp->caps & MACB_CAPS_USRIO_DISABLED))
		regs_buff[12] = macb_or_gem_readl(bp, USRIO);
	if (macb_is_gem(bp->caps))
		regs_buff[13] = gem_readl(bp, DMACFG);
}
>
>> unsigned int tail, head;
>> u32 *regs_buff = p;
>>
>> regs->version = (macb_readl(bp, MID) & ((1 << MACB_REV_SIZE) - 1))
>> | MACB_GREGS_VERSION;
>>
>> - tail = macb_tx_ring_wrap(bp, bp->queues[0].tx_tail);
>> - head = macb_tx_ring_wrap(bp, bp->queues[0].tx_head);
>> + tail = macb_tx_ring_wrap(bp, txq->tail);
>> + head = macb_tx_ring_wrap(bp, txq->head);
>>
>> regs_buff[0] = macb_readl(bp, NCR);
>> regs_buff[1] = macb_or_gem_readl(bp, NCFGR);
>> @@ -3682,8 +3764,8 @@ static void macb_get_ringparam(struct net_device *netdev,
>> ring->rx_max_pending = MAX_RX_RING_SIZE;
>> ring->tx_max_pending = MAX_TX_RING_SIZE;
>>
>> - ring->rx_pending = bp->rx_ring_size;
>> - ring->tx_pending = bp->tx_ring_size;
>> + ring->rx_pending = bp->ctx->rx_ring_size;
>> + ring->tx_pending = bp->ctx->tx_ring_size;
>
> Same NULL ctx issue as above. This one could just read from
> bp->configured_{rx,tx}_ring_size instead.
Agreed with the fix, easy one.
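Something like this for V2, reading the configured values (which stay
valid while the interface is down):

	ring->rx_pending = bp->configured_rx_ring_size;
	ring->tx_pending = bp->configured_tx_ring_size;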
Thanks!
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-02 11:29 ` Nicolai Buchwitz
@ 2026-04-02 16:31 ` Théo Lebrun
2026-04-03 9:03 ` Théo Lebrun
0 siblings, 1 reply; 24+ messages in thread
From: Théo Lebrun @ 2026-04-02 16:31 UTC (permalink / raw)
To: Nicolai Buchwitz, Théo Lebrun
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On Thu Apr 2, 2026 at 1:29 PM CEST, Nicolai Buchwitz wrote:
> On 1.4.2026 18:39, Théo Lebrun wrote:
>> ethtool_ops.set_ringparam() is implemented using the primitive close /
>> update ring size / reopen sequence. Under memory pressure this does not
>> fly: we free our buffers at close and cannot reallocate new ones at
>> open. Also, it triggers a slow PHY reinit.
>>
>> Instead, exploit the new context mechanism and improve our sequence to:
>> - allocate a new context (including buffers) first
>> - if it fails, early return without any impact to the interface
>> - stop interface
>> - update global state (bp, netdev, etc)
>> - pass buffer pointers to the hardware
>> - start interface
>> - free old context.
>>
>> The HW disable sequence is inspired by macb_reset_hw() but avoids
>> (1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
>>
>> The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
>> over register writes which would be redundant (because values have not
>> changed).
>>
>> The generic context swapping parts are isolated into helper functions
>> macb_context_swap_start|end(), reusable by other operations (change_mtu,
>> set_channels, etc).
>>
>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>> ---
>> drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
>> 1 file changed, 82 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
>> index 42b19b969f3e..543356554c11 100644
>> --- a/drivers/net/ethernet/cadence/macb_main.c
>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>> @@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
>> return ctx;
>> }
>>
>> +static void macb_context_swap_start(struct macb *bp)
>> +{
>> + struct macb_queue *queue;
>> + unsigned int q;
>> + u32 ctrl;
>> +
>> + /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
>> +
>> + netif_tx_disable(bp->netdev);
>> +
>> + ctrl = macb_readl(bp, NCR);
>> + macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
>> +
>> + macb_writel(bp, TSR, -1);
>> + macb_writel(bp, RSR, -1);
>> +
>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>> + queue_writel(queue, IDR, -1);
>> + queue_readl(queue, ISR);
>> + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
>> + queue_writel(queue, ISR, -1);
>> + }
>> +
>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>> + napi_disable(&queue->napi_rx);
>> + napi_disable(&queue->napi_tx);
>> + }
>
> tx_error_task, hresp_err_bh_work, and tx_lpi_work all dereference
> bp->ctx and could race with the pointer swap in swap_end.
> macb_close() cancels at least tx_lpi_work here. Should these be
> flushed too?
This is a large topic! While trying to find a solution as part of this
series, I noticed many race conditions. With this context series we
worsen some of them (by introducing potential bp->ctx NULL pointer
dereferences).
Let's start by identifying all schedule-able contexts involved:
- #1 any request from userspace, too many callbacks to list
- #2 NAPI softirq or kthread context, macb_{rx,tx}_poll()
- #3 bp->hresp_err_bh_work / macb_hresp_error_task()
- #4 bp->tx_lpi_work / macb_tx_lpi_work_fn()
- #5 queue->tx_error_task / macb_tx_error_task()
- #6 IRQ context, macb_interrupt()
Some race conditions:
- #1 macb_close() doesn't cancel & wait upon #3 hresp_err_bh_work.
They could race, especially as #3 doesn't grab bp->lock. One race
example: #3 HRESP handling restarts the interface after it has been
closed and buffers freed. RBQP/TBQP are not reset, so MACB would
corrupt memory on Rx and transmit stale memory content.
- #1 macb_close() doesn't cancel & wait upon #5 tx_error_task. #5 does
grab bp->lock but that doesn't make it much safer. One race example:
same as above, restart of interface with ghost ring buffers.
- #3 hresp_err_bh_work could collide with anything as it does no
locking, especially #1 (xmit for example) or #2 (NAPI). It is less
likely to collide with #6 IRQ because it starts by disabling IRQs,
but there is a possibility of the IRQ having already triggered, with
macb_interrupt() already running in parallel with
macb_hresp_error_task().
- #5 queue->tx_error_task writes to Tx head/tail inside bp->lock.
#1 macb_start_xmit() modifies those too, but inside
queue->tx_ptr_lock. Oops. There probably are other places modifying
head/tail or any other Tx queue value without queue->tx_ptr_lock.
- #5 macb_tx_error_task() tries to gently disable TX, but if it
times out, it falls back to the global switch (TE field in the NCR
register). That sounds racy with #2 NAPI, which doesn't grab bp->lock
and would probably break if the interface is shut down under its
feet.
I don't see much more. To fix all that, someone ought to exhaustively go
through all tasks (#1-6 above) & all shared data and reason one by one.
Who will be that someone? ;-) But that sounds pretty unrelated to the
series at hand, no?
I'd agree that some locking of bp->lock around the swap operation would
improve the series, and I'll add that in V2 for sure!
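Most likely something around the pointer swap in
macb_context_swap_end(), along these lines (untested sketch; lock
scope to be refined):

	unsigned long flags;

	spin_lock_irqsave(&bp->lock, flags);
	old_ctx = bp->ctx;
	bp->ctx = new_ctx;
	spin_unlock_irqrestore(&bp->lock, flags);

	macb_init_buffers(bp);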
>
>> +}
>> +
>> +static void macb_context_swap_end(struct macb *bp,
>> + struct macb_context *new_ctx)
>> +{
>> + struct macb_context *old_ctx;
>> + struct macb_queue *queue;
>> + unsigned int q;
>> + u32 ctrl;
>> +
>> + /* Swap contexts & give buffer pointers to HW. */
>> +
>> + old_ctx = bp->ctx;
>> + bp->ctx = new_ctx;
>> + macb_init_buffers(bp);
>> +
>> + /* Start NAPI, HW Tx/Rx and software Tx. */
>> +
>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>> + napi_enable(&queue->napi_rx);
>> + napi_enable(&queue->napi_tx);
>> + }
>> +
>> + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
>> + for (q = 0, queue = bp->queues; q < bp->num_queues;
>> + ++q, ++queue) {
>> + queue_writel(queue, IER,
>> + bp->rx_intr_mask |
>> + MACB_TX_INT_FLAGS |
>> + MACB_BIT(HRESP));
>> + }
>> + }
>> +
>> + ctrl = macb_readl(bp, NCR);
>> + macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
>> +
>> + netif_tx_start_all_queues(bp->netdev);
>> +
>> + /* Free old context. */
>> +
>> + macb_free_consistent(old_ctx);
>
> 1. kfree(old_ctx) is missing. The context struct itself leaks on
> every swap.
Agreed.
> 2. macb_close() calls netdev_tx_reset_queue() for each queue.
> Shouldn't the swap do the same? BQL accounting will be stale
> after switching to a fresh context.
I explicitly left that out as I thought DQL would benefit from keeping
past context of the traffic. But indeed, as we start afresh from a new
set of buffers, we should reset DQL. fbnic, pointed out as a good
example by Jakub recently, does that.
>
> 3. macb_configure_dma() is not called after the swap. For
> set_ringparam this is probably fine since rx_buffer_size
> does not change, but this becomes a problem in patch 11.
Indeed, I had missed that it takes bp->ctx->rx_buffer_size as a
parameter. Will fix.
Thanks,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-01 20:17 ` Maxime Chevallier
@ 2026-04-02 16:34 ` Théo Lebrun
0 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-02 16:34 UTC (permalink / raw)
To: Maxime Chevallier, Théo Lebrun, Nicolas Ferre,
Claudiu Beznea, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Richard Cochran, Russell King
Cc: Paolo Valerio, Conor Dooley, Nicolai Buchwitz,
Vladimir Kondratiev, Gregory CLEMENT, Benoît Monin,
Tawfik Bayouk, Thomas Petazzoni, netdev, linux-kernel
On Wed Apr 1, 2026 at 10:17 PM CEST, Maxime Chevallier wrote:
> On 01/04/2026 18:39, Théo Lebrun wrote:
>> ethtool_ops.set_ringparam() is implemented using the primitive close /
>> update ring size / reopen sequence. Under memory pressure this does not
>> fly: we free our buffers at close and cannot reallocate new ones at
>> open. Also, it triggers a slow PHY reinit.
>>
>> Instead, exploit the new context mechanism and improve our sequence to:
>> - allocate a new context (including buffers) first
>> - if it fails, early return without any impact to the interface
>> - stop interface
>> - update global state (bp, netdev, etc)
>> - pass buffer pointers to the hardware
>> - start interface
>> - free old context.
>>
>> The HW disable sequence is inspired by macb_reset_hw() but avoids
>> (1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
>>
>> The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
>> over register writes which would be redundant (because values have not
>> changed).
>>
>> The generic context swapping parts are isolated into helper functions
>> macb_context_swap_start|end(), reusable by other operations (change_mtu,
>> set_channels, etc).
>>
>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>> ---
>> drivers/net/ethernet/cadence/macb_main.c | 89 +++++++++++++++++++++++++++++---
>> 1 file changed, 82 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
>> index 42b19b969f3e..543356554c11 100644
>> --- a/drivers/net/ethernet/cadence/macb_main.c
>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>> @@ -2905,6 +2905,76 @@ static struct macb_context *macb_context_alloc(struct macb *bp,
>> return ctx;
>> }
>>
>> +static void macb_context_swap_start(struct macb *bp)
>> +{
>> + struct macb_queue *queue;
>> + unsigned int q;
>> + u32 ctrl;
>> +
>> + /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
>> +
>> + netif_tx_disable(bp->netdev);
>> +
>> + ctrl = macb_readl(bp, NCR);
>> + macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
>> +
>> + macb_writel(bp, TSR, -1);
>> + macb_writel(bp, RSR, -1);
>> +
>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>> + queue_writel(queue, IDR, -1);
>> + queue_readl(queue, ISR);
>> + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
>> + queue_writel(queue, ISR, -1);
>> + }
>
> These registers appear to be protected by bp->lock, any chance that this
> may race with an interrupt in the middle of them being configured here ?
The topic is complex! I dug into it deeply this afternoon and replied in
the neighbouring thread started by Nicolai. It might be of interest to you.
https://lore.kernel.org/netdev/90f843aa3940bdbabadddce27314c1f1@tipi-net.de/
(will appear as a child to this email, it hasn't been indexed yet)
Thanks Maxime,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
* Re: [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam()
2026-04-02 16:31 ` Théo Lebrun
@ 2026-04-03 9:03 ` Théo Lebrun
0 siblings, 0 replies; 24+ messages in thread
From: Théo Lebrun @ 2026-04-03 9:03 UTC (permalink / raw)
To: Théo Lebrun, Nicolai Buchwitz
Cc: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Richard Cochran,
Russell King, Paolo Valerio, Conor Dooley, Vladimir Kondratiev,
Gregory CLEMENT, Benoît Monin, Tawfik Bayouk,
Thomas Petazzoni, Maxime Chevallier, netdev, linux-kernel
On Thu Apr 2, 2026 at 6:31 PM CEST, Théo Lebrun wrote:
> On Thu Apr 2, 2026 at 1:29 PM CEST, Nicolai Buchwitz wrote:
>> On 1.4.2026 18:39, Théo Lebrun wrote:
>>> ethtool_ops.set_ringparam() is implemented using the primitive close /
>>> update ring size / reopen sequence. Under memory pressure this does not
>>> fly: we free our buffers at close and cannot reallocate new ones at
>>> open. Also, it triggers a slow PHY reinit.
>>>
>>> Instead, exploit the new context mechanism and improve our sequence to:
>>> - allocate a new context (including buffers) first
>>> - if it fails, early return without any impact to the interface
>>> - stop interface
>>> - update global state (bp, netdev, etc)
>>> - pass buffer pointers to the hardware
>>> - start interface
>>> - free old context.
>>>
>>> The HW disable sequence is inspired by macb_reset_hw() but avoids
>>> (1) setting NCR bit CLRSTAT and (2) clearing register PBUFRXCUT.
>>>
>>> The HW re-enable sequence is inspired by macb_mac_link_up(), skipping
>>> over register writes which would be redundant (because values have not
>>> changed).
>>>
>>> The generic context swapping parts are isolated into helper functions
>>> macb_context_swap_start|end(), reusable by other operations
>>> (change_mtu,
>>> set_channels, etc).
>>>
>>> Signed-off-by: Théo Lebrun <theo.lebrun@bootlin.com>
>>> ---
>>> drivers/net/ethernet/cadence/macb_main.c | 89
>>> +++++++++++++++++++++++++++++---
>>> 1 file changed, 82 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/cadence/macb_main.c
>>> b/drivers/net/ethernet/cadence/macb_main.c
>>> index 42b19b969f3e..543356554c11 100644
>>> --- a/drivers/net/ethernet/cadence/macb_main.c
>>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>>> @@ -2905,6 +2905,76 @@ static struct macb_context
>>> *macb_context_alloc(struct macb *bp,
>>> return ctx;
>>> }
>>>
>>> +static void macb_context_swap_start(struct macb *bp)
>>> +{
>>> + struct macb_queue *queue;
>>> + unsigned int q;
>>> + u32 ctrl;
>>> +
>>> + /* Disable software Tx, disable HW Tx/Rx and disable NAPI. */
>>> +
>>> + netif_tx_disable(bp->netdev);
>>> +
>>> + ctrl = macb_readl(bp, NCR);
>>> + macb_writel(bp, NCR, ctrl & ~(MACB_BIT(RE) | MACB_BIT(TE)));
>>> +
>>> + macb_writel(bp, TSR, -1);
>>> + macb_writel(bp, RSR, -1);
>>> +
>>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>>> + queue_writel(queue, IDR, -1);
>>> + queue_readl(queue, ISR);
>>> + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
>>> + queue_writel(queue, ISR, -1);
>>> + }
>>> +
>>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>>> + napi_disable(&queue->napi_rx);
>>> + napi_disable(&queue->napi_tx);
>>> + }
>>
>> tx_error_task, hresp_err_bh_work, and tx_lpi_work all dereference
>> bp->ctx and could race with the pointer swap in swap_end.
>> macb_close() cancels at least tx_lpi_work here. Should these be
>> flushed too?
>
> This is a large topic! While trying to find a solution as part of this
> series I am noticing many race conditions. With this context series we
> worsen some (by introducing bp->ctx NULL ptr dereference).
>
> Let's start by identifying all schedule-able contexts involved:
> - #1 any request from userspace, too many callbacks to list
> - #2 NAPI softirq or kthread context, macb_{rx,tx}_poll()
> - #3 bp->hresp_err_bh_work / macb_hresp_error_task()
> - #4 bp->tx_lpi_work / macb_tx_lpi_work_fn()
> - #5 queue->tx_error_task / macb_tx_error_task()
> - #6 IRQ context, macb_interrupt()
>
> Some race conditions:
>
> - #1 macb_close() doesn't cancel & wait upon #3 hresp_err_bh_work.
> They could race, especially as #3 doesn't grab bp->lock. One race
> example: the #3 HRESP bottom half restarts the interface after it has
> been closed and its buffers freed. RBQP/TBQP are not reset, so MACB
> would corrupt memory on Rx and transmit stale memory contents.
>
> - #1 macb_close() doesn't cancel & wait upon #5 tx_error_task. #5 does
> grab bp->lock but that doesn't make it much safer. One race example:
> same as above, restart of interface with ghost ring buffers.
>
> - #3 hresp_err_bh_work could collide with anything as it does no
> locking, especially #1 (xmit for example) or #2 (NAPI). It is less
> likely to collide with #6 IRQ because it starts by disabling those
> but there is a possibility of the IRQ having already triggered and
> macb_interrupt() already running in parallel with
> macb_hresp_error_task().
>
> - #5 queue->tx_error_task writes to Tx head/tail inside bp->lock.
> #1 macb_start_xmit() modifies those too, but inside
> queue->tx_ptr_lock. Oops. There are probably other places modifying
> head/tail or any other Tx queue value without queue->tx_ptr_lock.
>
> - #5 macb_tx_error_task() tries to gently disable TX but if it
> times out then it uses the global switch (the TE field in the NCR
> register). That sounds racy with #2 NAPI, which doesn't grab bp->lock
> and would probably break if the interface is shut down under its
> feet.
>
> I don't see much more for now. To fix all that, someone ought to go
> exhaustively through all tasks (#1-6 above) and all shared data and
> reason about each one by one.
> Who will be that someone? ;-) But that sounds pretty unrelated to the
> series at hand, no?
>
> I'd agree that some locking of bp->lock around the swap operation would
> improve the series, and I'll add that in V2 for sure!
After some sleep, I feel like my message was a bit rough. To clarify
what I plan for V2:
- grab bp->lock on swap to protect us against some of #1 userspace and
all of #6 IRQ.
- disabling #2 NAPI on swap is already done
- disable all three BH work items on swap
That will not fix everything listed above.
On top, we should:
- check/revise our locking strategy for almost all codepaths,
- check all BH work items are disabled and flushed in the right
  codepaths,
- in many bp->lock critical sections, we should early exit if !bp->ctx.
>
>>
>>> +}
>>> +
>>> +static void macb_context_swap_end(struct macb *bp,
>>> + struct macb_context *new_ctx)
>>> +{
>>> + struct macb_context *old_ctx;
>>> + struct macb_queue *queue;
>>> + unsigned int q;
>>> + u32 ctrl;
>>> +
>>> + /* Swap contexts & give buffer pointers to HW. */
>>> +
>>> + old_ctx = bp->ctx;
>>> + bp->ctx = new_ctx;
>>> + macb_init_buffers(bp);
>>> +
>>> + /* Start NAPI, HW Tx/Rx and software Tx. */
>>> +
>>> + for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
>>> + napi_enable(&queue->napi_rx);
>>> + napi_enable(&queue->napi_tx);
>>> + }
>>> +
>>> + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
>>> + for (q = 0, queue = bp->queues; q < bp->num_queues;
>>> + ++q, ++queue) {
>>> + queue_writel(queue, IER,
>>> + bp->rx_intr_mask |
>>> + MACB_TX_INT_FLAGS |
>>> + MACB_BIT(HRESP));
>>> + }
>>> + }
>>> +
>>> + ctrl = macb_readl(bp, NCR);
>>> + macb_writel(bp, NCR, ctrl | MACB_BIT(RE) | MACB_BIT(TE));
>>> +
>>> + netif_tx_start_all_queues(bp->netdev);
>>> +
>>> + /* Free old context. */
>>> +
>>> + macb_free_consistent(old_ctx);
>>
>> 1. kfree(old_ctx) is missing. The context struct itself leaks on
>> every swap.
>
> Agreed.
>
>> 2. macb_close() calls netdev_tx_reset_queue() for each queue.
>> Shouldn't the swap do the same? BQL accounting will be stale
>> after switching to a fresh context.
>
> I explicitly left that out as I thought DQL would benefit from keeping
> the history of past traffic. But indeed, as we start afresh from a new
> set of buffers, we should reset DQL. fbnic, which Jakub recently
> pointed out as a good example, does that.
>
>>
>> 3. macb_configure_dma() is not called after the swap. For
>> set_ringparam this is probably fine since rx_buffer_size
>> does not change, but this becomes a problem in patch 11.
>
> Indeed, I had missed that it takes bp->ctx->rx_buffer_size as a parameter.
> Will fix.
Thanks,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com
end of thread, other threads:[~2026-04-03 9:04 UTC | newest]
Thread overview: 24+ messages
2026-04-01 16:39 [PATCH net-next 00/11] net: macb: implement context swapping Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 01/11] net: macb: unify device pointer naming convention Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 02/11] net: macb: unify `struct macb *` " Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 03/11] net: macb: unify queue index variable naming convention and types Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 04/11] net: macb: enforce reverse christmas tree (RCT) convention Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 05/11] net: macb: allocate tieoff descriptor once across device lifetime Théo Lebrun
2026-04-02 11:14 ` Nicolai Buchwitz
2026-04-02 13:57 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 06/11] net: macb: introduce macb_context struct for buffer management Théo Lebrun
2026-04-02 11:22 ` Nicolai Buchwitz
2026-04-02 14:11 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 07/11] net: macb: avoid macb_init_rx_buffer_size() modifying state Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 08/11] net: macb: make `struct macb` subset reachable from macb_context struct Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 09/11] net: macb: introduce macb_context_alloc() helper Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 10/11] net: macb: use context swapping in .set_ringparam() Théo Lebrun
2026-04-01 20:17 ` Maxime Chevallier
2026-04-02 16:34 ` Théo Lebrun
2026-04-02 11:29 ` Nicolai Buchwitz
2026-04-02 16:31 ` Théo Lebrun
2026-04-03 9:03 ` Théo Lebrun
2026-04-01 16:39 ` [PATCH net-next 11/11] net: macb: use context swapping in .ndo_change_mtu() Théo Lebrun
2026-04-02 11:30 ` Nicolai Buchwitz
2026-04-02 11:35 ` [PATCH net-next 00/11] net: macb: implement context swapping Nicolai Buchwitz
2026-04-02 13:46 ` Théo Lebrun