From: Lukasz Majewski <lukasz.majewski@mailbox.org>
To: Andrew Lunn, davem@davemloft.net, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Shawn Guo
Cc: Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam, Richard Cochran,
	netdev@vger.kernel.org, devicetree@vger.kernel.org,
	linux-kernel@vger.kernel.org, imx@lists.linux.dev,
	linux-arm-kernel@lists.infradead.org, Stefan Wahren, Simon Horman,
	Lukasz Majewski
Subject: [net-next RESEND v17 07/12] net: mtip: Add mtip_switch_{rx|tx} functions to the L2 switch driver
Date: Tue, 12 Aug 2025 10:29:34 +0200
Message-Id: <20250812082939.541733-8-lukasz.majewski@mailbox.org>
In-Reply-To: <20250812082939.541733-1-lukasz.majewski@mailbox.org>
References: <20250812082939.541733-1-lukasz.majewski@mailbox.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch adds the mtip_switch_tx() and mtip_switch_rx() functions to the
MTIP L2 switch driver.

Signed-off-by: Lukasz Majewski <lukasz.majewski@mailbox.org>
---
Changes for v13:
- New patch, created by splitting some code out of the large (i.e. v12 and
  earlier) MTIP driver

Changes for v14:
- Rewrite the RX error handling code
- Remove "} else {" from the "if (unlikely(!skb)) {" condition in
  mtip_switch_rx()
- Remove locking from the RX path (it runs under the NAPI API, similar to
  the fec_main.c driver)
- Use net_prefetch() instead of prefetch()

Changes for v15:
- Use page_address() instead of __va()
- Remove the check that data is not NULL, as it cannot be NULL (those
  buffers are guaranteed to be allocated earlier on the RX path)

Changes for v16:
- Disable the RX interrupt while in the switch RX function
- Set offload_fwd_mark when L2 offloading is enabled, which fixes
  broadcast flooding (see the sketch below)
- Replace spin_{un}lock() with the _bh variant

Changes for v17:
- None
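Note for reviewers: the v16 broadcast-flooding fix relies on setting
skb->offload_fwd_mark in the RX path. A minimal sketch of the mechanism as
used by mtip_switch_rx() below; mtip_rx_deliver() is a hypothetical helper
for illustration only, and fep->br_offload is assumed to be maintained
elsewhere in the driver when the two external ports are bridged:

/* Hypothetical helper, for illustration only - not part of this patch.
 * When forwarding between the bridged switch ports is offloaded to the
 * MTIP hardware (fep->br_offload is true), marking the skb tells the
 * software bridge that the hardware has already forwarded/flooded this
 * frame, so the bridge must not flood it to the other port again.
 */
static void mtip_rx_deliver(struct switch_enet_private *fep,
			    struct sk_buff *skb)
{
	skb->offload_fwd_mark = fep->br_offload;
	napi_gro_receive(&fep->napi, skb);
}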
---
 .../net/ethernet/freescale/mtipsw/mtipl2sw.c | 240 +++++++++++++++++-
 1 file changed, 239 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c b/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
index 39a4997fc8fe..0f7d2f479161 100644
--- a/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
+++ b/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
@@ -228,6 +228,39 @@ struct mtip_port_info *mtip_portinfofifo_read(struct switch_enet_private *fep)
 	return info;
 }
 
+static void mtip_atable_get_entry_port_number(struct switch_enet_private *fep,
+					      unsigned char *mac_addr, u8 *port)
+{
+	int block_index, block_index_end, entry;
+	u32 mac_addr_lo, mac_addr_hi;
+	u32 read_lo, read_hi;
+
+	mac_addr_lo = (u32)((mac_addr[3] << 24) | (mac_addr[2] << 16) |
+			    (mac_addr[1] << 8) | mac_addr[0]);
+	mac_addr_hi = (u32)((mac_addr[5] << 8) | (mac_addr[4]));
+
+	block_index = GET_BLOCK_PTR(crc8_calc(mac_addr));
+	block_index_end = block_index + ATABLE_ENTRY_PER_SLOT;
+
+	/* now search all the entries in the selected block */
+	for (entry = block_index; entry < block_index_end; entry++) {
+		mtip_read_atable(fep, entry, &read_lo, &read_hi);
+		*port = MTIP_PORT_FORWARDING_INIT;
+
+		if (read_lo == mac_addr_lo &&
+		    ((read_hi & 0x0000FFFF) ==
+		     (mac_addr_hi & 0x0000FFFF))) {
+			/* found the correct address */
+			if ((read_hi & (1 << 16)) && (!(read_hi & (1 << 17))))
+				*port = FIELD_GET(AT_PORT_MASK, read_hi);
+			break;
+		}
+	}
+
+	dev_dbg(&fep->pdev->dev, "%s: MAC: %pM PORT: 0x%x\n", __func__,
+		mac_addr, *port);
+}
+
 /* Clear complete MAC Look Up Table */
 void mtip_clear_atable(struct switch_enet_private *fep)
 {
@@ -810,11 +843,216 @@ static irqreturn_t mtip_interrupt(int irq, void *ptr_fep)
 
 static void mtip_switch_tx(struct net_device *dev)
 {
+	struct mtip_ndev_priv *priv = netdev_priv(dev);
+	struct switch_enet_private *fep = priv->fep;
+	unsigned short status;
+	struct sk_buff *skb;
+	struct cbd_t *bdp;
+
+	spin_lock_bh(&fep->hw_lock);
+	bdp = fep->dirty_tx;
+
+	while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
+		if (bdp == fep->cur_tx && fep->tx_full == 0)
+			break;
+
+		dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+				 MTIP_SWITCH_TX_FRSIZE, DMA_TO_DEVICE);
+		bdp->cbd_bufaddr = 0;
+		skb = fep->tx_skbuff[fep->skb_dirty];
+		/* Check for errors */
+		if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+			      BD_ENET_TX_RL | BD_ENET_TX_UN |
+			      BD_ENET_TX_CSL)) {
+			dev->stats.tx_errors++;
+			if (status & BD_ENET_TX_HB)  /* No heartbeat */
+				dev->stats.tx_heartbeat_errors++;
+			if (status & BD_ENET_TX_LC)  /* Late collision */
+				dev->stats.tx_window_errors++;
+			if (status & BD_ENET_TX_RL)  /* Retrans limit */
+				dev->stats.tx_aborted_errors++;
+			if (status & BD_ENET_TX_UN)  /* Underrun */
+				dev->stats.tx_fifo_errors++;
+			if (status & BD_ENET_TX_CSL) /* Carrier lost */
+				dev->stats.tx_carrier_errors++;
+		} else {
+			dev->stats.tx_packets++;
+		}
+
+		if (status & BD_ENET_TX_READY)
+			dev_err(&fep->pdev->dev,
+				"Enet xmit interrupt and TX_READY.\n");
+
+		/* Deferred means some collisions occurred during transmit,
+		 * but we eventually sent the packet OK.
+		 */
+		if (status & BD_ENET_TX_DEF)
+			dev->stats.collisions++;
+
+		/* Free the sk buffer associated with this last transmit */
+		dev_consume_skb_irq(skb);
+		fep->tx_skbuff[fep->skb_dirty] = NULL;
+		fep->skb_dirty = (fep->skb_dirty + 1) & TX_RING_MOD_MASK;
+
+		/* Update pointer to next buffer descriptor to be transmitted */
+		if (status & BD_ENET_TX_WRAP)
+			bdp = fep->tx_bd_base;
+		else
+			bdp++;
+
+		/* Since we have freed up a buffer, the ring is no longer
+		 * full.
+		 */
+		if (fep->tx_full) {
+			fep->tx_full = 0;
+			if (netif_queue_stopped(dev))
+				netif_wake_queue(dev);
+		}
+	}
+	fep->dirty_tx = bdp;
+	spin_unlock_bh(&fep->hw_lock);
 }
 
+/* During a receive, the cur_rx points to the current incoming buffer.
+ * When we update through the ring, if the next incoming buffer has
+ * not been given to the system, we just set the empty indicator,
+ * effectively tossing the packet.
+ */
 static int mtip_switch_rx(struct net_device *dev, int budget, int *port)
 {
-	return -ENOMEM;
+	struct mtip_ndev_priv *priv = netdev_priv(dev);
+	u8 *data, rx_port = MTIP_PORT_FORWARDING_INIT;
+	struct switch_enet_private *fep = priv->fep;
+	unsigned short status, pkt_len;
+	struct net_device *pndev;
+	struct ethhdr *eth_hdr;
+	int pkt_received = 0;
+	struct sk_buff *skb;
+	struct cbd_t *bdp;
+	struct page *page;
+
+	/* First, grab all of the stats for the incoming packet.
+	 * These get messed up if we get called due to a busy condition.
+	 */
+	bdp = fep->cur_rx;
+
+	while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
+		if (pkt_received >= budget)
+			break;
+
+		pkt_received++;
+
+		writel(MCF_ESW_IMR_RXF, fep->hwp + ESW_ISR);
+		if (!fep->usage_count)
+			goto rx_processing_done;
+
+		status ^= BD_ENET_RX_LAST;
+		/* Check for errors. */
+		if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_NO |
+			      BD_ENET_RX_CR | BD_ENET_RX_OV | BD_ENET_RX_LAST |
+			      BD_ENET_RX_CL)) {
+			dev->stats.rx_errors++;
+			if (status & BD_ENET_RX_OV) {
+				/* FIFO overrun */
+				dev->stats.rx_fifo_errors++;
+				goto rx_processing_done;
+			}
+			if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH
+				      | BD_ENET_RX_LAST)) {
+				/* Frame too long or too short. */
+				dev->stats.rx_length_errors++;
+				if (status & BD_ENET_RX_LAST)
+					netdev_err(dev, "rcv is not +last\n");
+			}
+			if (status & BD_ENET_RX_CR) /* CRC Error */
+				dev->stats.rx_crc_errors++;
+
+			/* Report late collisions as a frame error. */
+			if (status & (BD_ENET_RX_NO | BD_ENET_RX_CL))
+				dev->stats.rx_frame_errors++;
+			goto rx_processing_done;
+		}
+
+		/* Get correct RX page */
+		page = fep->page[bdp - fep->rx_bd_base];
+		/* Process the incoming frame */
+		pkt_len = bdp->cbd_datlen;
+
+		dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
+					pkt_len, DMA_FROM_DEVICE);
+		net_prefetch(page_address(page));
+		data = page_address(page);
+
+		if (fep->quirks & FEC_QUIRK_SWAP_FRAME)
+			swap_buffer(data, pkt_len);
+
+		eth_hdr = (struct ethhdr *)data;
+		mtip_atable_get_entry_port_number(fep, eth_hdr->h_source,
+						  &rx_port);
+		if (rx_port == MTIP_PORT_FORWARDING_INIT)
+			mtip_atable_dynamicms_learn_migration(fep,
+							      mtip_get_time(),
+							      eth_hdr->h_source,
+							      &rx_port);
+
+		if ((rx_port == 1 || rx_port == 2) && fep->ndev[rx_port - 1])
+			pndev = fep->ndev[rx_port - 1];
+		else
+			pndev = dev;
+
+		*port = rx_port;
+
+		/* This does 16 byte alignment, exactly what we need.
+		 * The packet length includes FCS, but we don't want to
+		 * include that when passing upstream as it messes up
+		 * bridging applications.
+		 */
+		skb = netdev_alloc_skb(pndev, pkt_len + NET_IP_ALIGN);
+		if (unlikely(!skb)) {
+			dev_dbg(&fep->pdev->dev,
+				"%s: Memory squeeze, dropping packet.\n",
+				pndev->name);
+			page_pool_recycle_direct(fep->page_pool, page);
+			pndev->stats.rx_dropped++;
+			return -ENOMEM;
+		}
+
+		skb_reserve(skb, NET_IP_ALIGN);
+		skb_put(skb, pkt_len); /* Make room */
+		skb_copy_to_linear_data(skb, data, pkt_len);
+		skb->protocol = eth_type_trans(skb, pndev);
+		skb->offload_fwd_mark = fep->br_offload;
+		napi_gro_receive(&fep->napi, skb);
+
+		pndev->stats.rx_packets++;
+		pndev->stats.rx_bytes += pkt_len;
+
+rx_processing_done:
+		/* Clear the status flags for this buffer */
+		status &= ~BD_ENET_RX_STATS;
+
+		/* Mark the buffer empty */
+		status |= BD_ENET_RX_EMPTY;
+		/* Make sure that updates to the descriptor are performed */
+		wmb();
+		bdp->cbd_sc = status;
+
+		/* Update BD pointer to next entry */
+		if (status & BD_ENET_RX_WRAP)
+			bdp = fep->rx_bd_base;
+		else
+			bdp++;
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames. On a heavily loaded network, we should be
+		 * able to keep up at the expense of system resources.
+		 */
+		writel(MCF_ESW_RDAR_R_DES_ACTIVE, fep->hwp + ESW_RDAR);
+	} /* while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) */
+
+	fep->cur_rx = bdp;
+
+	return pkt_received;
 }
 
 static void mtip_adjust_link(struct net_device *dev)
-- 
2.39.5
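
For context: mtip_switch_rx() follows the usual NAPI contract - it processes
at most @budget descriptors and returns the number handled, so the caller
can decide when to complete polling. A hypothetical sketch of a poll handler
consuming it; the driver's real poll function is not in this patch, and the
fep->ndev[0] argument and the interrupt re-enable write are assumptions:

/* Hypothetical NAPI poll handler, for illustration only. */
static int mtip_napi_poll(struct napi_struct *napi, int budget)
{
	struct switch_enet_private *fep =
		container_of(napi, struct switch_enet_private, napi);
	int port = 0;
	int done;

	/* Reap completed TX descriptors, then handle up to @budget
	 * received frames (handling of a negative return from
	 * mtip_switch_rx() is elided in this sketch).
	 */
	mtip_switch_tx(fep->ndev[0]);
	done = mtip_switch_rx(fep->ndev[0], budget, &port);

	/* Re-arm the RX interrupt only once the ring is drained */
	if (done < budget && napi_complete_done(napi, done))
		writel(MCF_ESW_IMR_RXF, fep->hwp + ESW_IMR);

	return done;
}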