From: Lukasz Majewski <lukma@denx.de>
To: Andrew Lunn, davem@davemloft.net, Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Shawn Guo
Cc: Sascha Hauer, Pengutronix Kernel Team, Fabio Estevam, Richard Cochran,
 netdev@vger.kernel.org, devicetree@vger.kernel.org,
 linux-kernel@vger.kernel.org, imx@lists.linux.dev,
 linux-arm-kernel@lists.infradead.org, Stefan Wahren, Simon Horman,
 Lukasz Majewski
Subject: [net-next v17 07/12] net: mtip: Add mtip_switch_{rx|tx} functions to
 the L2 switch driver
Date: Sun, 27 Jul 2025 12:01:23 +0200
Message-Id: <20250727100128.1411514-8-lukma@denx.de>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250727100128.1411514-1-lukma@denx.de>
References: <20250727100128.1411514-1-lukma@denx.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Provide the mtip_switch_tx() and mtip_switch_rx() implementations for the
MTIP L2 switch.

Signed-off-by: Lukasz Majewski <lukma@denx.de>
---
Changes for v13:
- New patch, created by splitting code out of the large (v12 and earlier)
  MTIP driver

Changes for v14:
- Rewrite the RX error handling code
- Remove the "} else {" branch from the "if (unlikely(!skb)) {" condition
  in mtip_switch_rx()
- Remove locking from the RX path (it runs under the NAPI API, similar to
  the fec_main.c driver)
- Use net_prefetch() instead of prefetch()

Changes for v15:
- Use page_address() instead of __va()
- Remove the check that data is not NULL, as it cannot be (those buffers
  are guaranteed to be allocated earlier for the RX path)

Changes for v16:
- Disable the RX interrupt while in the switch RX function
- Set offload_fwd_mark when L2 offloading is enabled (fixes broadcast
  flooding)
- Replace spin_{un}lock() with the _bh variants

Changes for v17:
- None
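
A note on the address-table lookup added in the first hunk below:
mtip_atable_get_entry_port_number() packs the 6-byte source MAC into two
32-bit words, probes only the CRC-selected block, and accepts an entry
when the valid bit (16) is set and the static bit (17) is clear. Here is
a minimal standalone sketch of that packing and bit test; the port-field
position (EXAMPLE_AT_PORT_MASK, bits 24..27) is an assumption for
illustration only, not the driver's real AT_PORT_MASK:

#include <stdint.h>
#include <stdio.h>

/* Assumed port-field position, purely for illustration; the patch
 * itself uses FIELD_GET(AT_PORT_MASK, read_hi) with the mask from the
 * driver's header.
 */
#define EXAMPLE_AT_PORT_SHIFT	24
#define EXAMPLE_AT_PORT_MASK	(0xFu << EXAMPLE_AT_PORT_SHIFT)
#define AT_ENTRY_VALID		(1u << 16)	/* bit tested in the patch */
#define AT_ENTRY_STATIC		(1u << 17)	/* bit tested in the patch */

/* Same packing as the patch: mac[0] is the least significant byte of
 * the low word; the high word carries only mac[4] and mac[5].
 */
static void pack_mac(const uint8_t *mac, uint32_t *lo, uint32_t *hi)
{
	*lo = ((uint32_t)mac[3] << 24) | ((uint32_t)mac[2] << 16) |
	      ((uint32_t)mac[1] << 8) | mac[0];
	*hi = ((uint32_t)mac[5] << 8) | mac[4];
}

int main(void)
{
	const uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint32_t lo, hi, entry_hi;

	pack_mac(mac, &lo, &hi);

	/* Fake a learned (valid, dynamic) entry pointing at port 2 */
	entry_hi = (hi & 0xFFFF) | AT_ENTRY_VALID |
		   (2u << EXAMPLE_AT_PORT_SHIFT);

	if ((entry_hi & AT_ENTRY_VALID) && !(entry_hi & AT_ENTRY_STATIC))
		printf("lo=0x%08x hi=0x%04x -> port %u\n", lo, hi & 0xFFFF,
		       (entry_hi & EXAMPLE_AT_PORT_MASK) >>
		       EXAMPLE_AT_PORT_SHIFT);

	return 0;
}
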
---
 .../net/ethernet/freescale/mtipsw/mtipl2sw.c | 240 +++++++++++++++++-
 1 file changed, 239 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c b/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
index 39a4997fc8fe..0f7d2f479161 100644
--- a/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
+++ b/drivers/net/ethernet/freescale/mtipsw/mtipl2sw.c
@@ -228,6 +228,39 @@ struct mtip_port_info *mtip_portinfofifo_read(struct switch_enet_private *fep)
 	return info;
 }
 
+static void mtip_atable_get_entry_port_number(struct switch_enet_private *fep,
+					      unsigned char *mac_addr, u8 *port)
+{
+	int block_index, block_index_end, entry;
+	u32 mac_addr_lo, mac_addr_hi;
+	u32 read_lo, read_hi;
+
+	mac_addr_lo = (u32)((mac_addr[3] << 24) | (mac_addr[2] << 16) |
+			    (mac_addr[1] << 8) | mac_addr[0]);
+	mac_addr_hi = (u32)((mac_addr[5] << 8) | (mac_addr[4]));
+
+	block_index = GET_BLOCK_PTR(crc8_calc(mac_addr));
+	block_index_end = block_index + ATABLE_ENTRY_PER_SLOT;
+
+	/* now search all the entries in the selected block */
+	for (entry = block_index; entry < block_index_end; entry++) {
+		mtip_read_atable(fep, entry, &read_lo, &read_hi);
+		*port = MTIP_PORT_FORWARDING_INIT;
+
+		if (read_lo == mac_addr_lo &&
+		    ((read_hi & 0x0000FFFF) ==
+		     (mac_addr_hi & 0x0000FFFF))) {
+			/* found the correct address */
+			if ((read_hi & (1 << 16)) && (!(read_hi & (1 << 17))))
+				*port = FIELD_GET(AT_PORT_MASK, read_hi);
+			break;
+		}
+	}
+
+	dev_dbg(&fep->pdev->dev, "%s: MAC: %pM PORT: 0x%x\n", __func__,
+		mac_addr, *port);
+}
+
 /* Clear complete MAC Look Up Table */
 void mtip_clear_atable(struct switch_enet_private *fep)
 {
@@ -810,11 +843,216 @@ static irqreturn_t mtip_interrupt(int irq, void *ptr_fep)
 
 static void mtip_switch_tx(struct net_device *dev)
 {
+	struct mtip_ndev_priv *priv = netdev_priv(dev);
+	struct switch_enet_private *fep = priv->fep;
+	unsigned short status;
+	struct sk_buff *skb;
+	struct cbd_t *bdp;
+
+	spin_lock_bh(&fep->hw_lock);
+	bdp = fep->dirty_tx;
+
+	while (((status = bdp->cbd_sc) & BD_ENET_TX_READY) == 0) {
+		if (bdp == fep->cur_tx && fep->tx_full == 0)
+			break;
+
+		dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+				 MTIP_SWITCH_TX_FRSIZE, DMA_TO_DEVICE);
+		bdp->cbd_bufaddr = 0;
+		skb = fep->tx_skbuff[fep->skb_dirty];
+		/* Check for errors */
+		if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+			      BD_ENET_TX_RL | BD_ENET_TX_UN |
+			      BD_ENET_TX_CSL)) {
+			dev->stats.tx_errors++;
+			if (status & BD_ENET_TX_HB) /* No heartbeat */
+				dev->stats.tx_heartbeat_errors++;
+			if (status & BD_ENET_TX_LC) /* Late collision */
+				dev->stats.tx_window_errors++;
+			if (status & BD_ENET_TX_RL) /* Retrans limit */
+				dev->stats.tx_aborted_errors++;
+			if (status & BD_ENET_TX_UN) /* Underrun */
+				dev->stats.tx_fifo_errors++;
+			if (status & BD_ENET_TX_CSL) /* Carrier lost */
+				dev->stats.tx_carrier_errors++;
+		} else {
+			dev->stats.tx_packets++;
+		}
+
+		if (status & BD_ENET_TX_READY)
+			dev_err(&fep->pdev->dev,
+				"Enet xmit interrupt and TX_READY.\n");
+
+		/* Deferred means some collisions occurred during transmit,
+		 * but we eventually sent the packet OK.
+		 */
+		if (status & BD_ENET_TX_DEF)
+			dev->stats.collisions++;
+
+		/* Free the sk buffer associated with this last transmit */
+		dev_consume_skb_irq(skb);
+		fep->tx_skbuff[fep->skb_dirty] = NULL;
+		fep->skb_dirty = (fep->skb_dirty + 1) & TX_RING_MOD_MASK;
+
+		/* Update pointer to next buffer descriptor to be transmitted */
+		if (status & BD_ENET_TX_WRAP)
+			bdp = fep->tx_bd_base;
+		else
+			bdp++;
+
+		/* Since we have freed up a buffer, the ring is no longer
+		 * full.
+		 */
+		if (fep->tx_full) {
+			fep->tx_full = 0;
+			if (netif_queue_stopped(dev))
+				netif_wake_queue(dev);
+		}
+	}
+	fep->dirty_tx = bdp;
+	spin_unlock_bh(&fep->hw_lock);
 }
 
+/* During a receive, the cur_rx points to the current incoming buffer.
+ * When we update through the ring, if the next incoming buffer has
+ * not been given to the system, we just set the empty indicator,
+ * effectively tossing the packet.
+ */
 static int mtip_switch_rx(struct net_device *dev, int budget, int *port)
 {
-	return -ENOMEM;
+	struct mtip_ndev_priv *priv = netdev_priv(dev);
+	u8 *data, rx_port = MTIP_PORT_FORWARDING_INIT;
+	struct switch_enet_private *fep = priv->fep;
+	unsigned short status, pkt_len;
+	struct net_device *pndev;
+	struct ethhdr *eth_hdr;
+	int pkt_received = 0;
+	struct sk_buff *skb;
+	struct cbd_t *bdp;
+	struct page *page;
+
+	/* First, grab all of the stats for the incoming packet.
+	 * These get messed up if we get called due to a busy condition.
+	 */
+	bdp = fep->cur_rx;
+
+	while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) {
+		if (pkt_received >= budget)
+			break;
+
+		pkt_received++;
+
+		writel(MCF_ESW_IMR_RXF, fep->hwp + ESW_ISR);
+		if (!fep->usage_count)
+			goto rx_processing_done;
+
+		status ^= BD_ENET_RX_LAST;
+		/* Check for errors. */
+		if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_NO |
+			      BD_ENET_RX_CR | BD_ENET_RX_OV | BD_ENET_RX_LAST |
+			      BD_ENET_RX_CL)) {
+			dev->stats.rx_errors++;
+			if (status & BD_ENET_RX_OV) {
+				/* FIFO overrun */
+				dev->stats.rx_fifo_errors++;
+				goto rx_processing_done;
+			}
+			if (status & (BD_ENET_RX_LG | BD_ENET_RX_SH
+				      | BD_ENET_RX_LAST)) {
+				/* Frame too long or too short. */
+				dev->stats.rx_length_errors++;
+				if (status & BD_ENET_RX_LAST)
+					netdev_err(dev, "rcv is not +last\n");
+			}
+			if (status & BD_ENET_RX_CR) /* CRC Error */
+				dev->stats.rx_crc_errors++;
+
+			/* Report late collisions as a frame error. */
+			if (status & (BD_ENET_RX_NO | BD_ENET_RX_CL))
+				dev->stats.rx_frame_errors++;
+			goto rx_processing_done;
+		}
+
+		/* Get correct RX page */
+		page = fep->page[bdp - fep->rx_bd_base];
+		/* Process the incoming frame */
+		pkt_len = bdp->cbd_datlen;
+
+		dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr,
+					pkt_len, DMA_FROM_DEVICE);
+		net_prefetch(page_address(page));
+		data = page_address(page);
+
+		if (fep->quirks & FEC_QUIRK_SWAP_FRAME)
+			swap_buffer(data, pkt_len);
+
+		eth_hdr = (struct ethhdr *)data;
+		mtip_atable_get_entry_port_number(fep, eth_hdr->h_source,
+						  &rx_port);
+		if (rx_port == MTIP_PORT_FORWARDING_INIT)
+			mtip_atable_dynamicms_learn_migration(fep,
+							      mtip_get_time(),
+							      eth_hdr->h_source,
+							      &rx_port);
+
+		if ((rx_port == 1 || rx_port == 2) && fep->ndev[rx_port - 1])
+			pndev = fep->ndev[rx_port - 1];
+		else
+			pndev = dev;
+
+		*port = rx_port;
+
+		/* This does 16 byte alignment, exactly what we need.
+		 * The packet length includes FCS, but we don't want to
+		 * include that when passing upstream as it messes up
+		 * bridging applications.
+		 */
+		skb = netdev_alloc_skb(pndev, pkt_len + NET_IP_ALIGN);
+		if (unlikely(!skb)) {
+			dev_dbg(&fep->pdev->dev,
+				"%s: Memory squeeze, dropping packet.\n",
+				pndev->name);
+			page_pool_recycle_direct(fep->page_pool, page);
+			pndev->stats.rx_dropped++;
+			return -ENOMEM;
+		}
+
+		skb_reserve(skb, NET_IP_ALIGN);
+		skb_put(skb, pkt_len);	/* Make room */
+		skb_copy_to_linear_data(skb, data, pkt_len);
+		skb->protocol = eth_type_trans(skb, pndev);
+		skb->offload_fwd_mark = fep->br_offload;
+		napi_gro_receive(&fep->napi, skb);
+
+		pndev->stats.rx_packets++;
+		pndev->stats.rx_bytes += pkt_len;
+
+rx_processing_done:
+		/* Clear the status flags for this buffer */
+		status &= ~BD_ENET_RX_STATS;
+
+		/* Mark the buffer empty */
+		status |= BD_ENET_RX_EMPTY;
+		/* Make sure that updates to the descriptor are performed */
+		wmb();
+		bdp->cbd_sc = status;
+
+		/* Update BD pointer to next entry */
+		if (status & BD_ENET_RX_WRAP)
+			bdp = fep->rx_bd_base;
+		else
+			bdp++;
+
+		/* Doing this here will keep the FEC running while we process
+		 * incoming frames. On a heavily loaded network, we should be
+		 * able to keep up at the expense of system resources.
+		 */
+		writel(MCF_ESW_RDAR_R_DES_ACTIVE, fep->hwp + ESW_RDAR);
+	} /* while (!((status = bdp->cbd_sc) & BD_ENET_RX_EMPTY)) */
+
+	fep->cur_rx = bdp;
+
+	return pkt_received;
 }
 
 static void mtip_adjust_link(struct net_device *dev)
-- 
2.39.5
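
A note on the descriptor handling above: both mtip_switch_tx() and
mtip_switch_rx() advance through their rings with the same FEC-style
pattern, stepping bdp forward until a descriptor carries the WRAP status
bit and then folding back to the ring base (tx_bd_base/rx_bd_base). Here
is a minimal standalone sketch of that walk; RING_SIZE, BD_WRAP and
struct bd are illustrative stand-ins, not the driver's cbd_t or
BD_ENET_{TX,RX}_WRAP definitions:

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8		/* stand-in for the driver's ring sizes */
#define BD_WRAP   (1u << 13)	/* assumed bit, stands in for BD_ENET_*_WRAP */

struct bd {
	uint16_t sc;		/* status/control word, like cbd_sc */
};

int main(void)
{
	struct bd ring[RING_SIZE] = { { 0 } };
	struct bd *bdp = ring;
	int i;

	/* Ring setup marks the last descriptor so the walker can fold
	 * back to the base instead of running off the array.
	 */
	ring[RING_SIZE - 1].sc |= BD_WRAP;

	/* Walk two full laps to show the fold-back */
	for (i = 0; i < 2 * RING_SIZE; i++) {
		printf("processing descriptor %td\n", bdp - ring);

		/* Same advance as the patch: fold on WRAP, else step */
		if (bdp->sc & BD_WRAP)
			bdp = ring;
		else
			bdp++;
	}

	return 0;
}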
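
Similarly, the ingress-port lookup on the RX path is a bucketed hash
probe: GET_BLOCK_PTR(crc8_calc(mac_addr)) selects one block and only
ATABLE_ENTRY_PER_SLOT entries are scanned, as sketched below. The CRC-8
here uses the generic 0x07 polynomial, and a plain multiply models
GET_BLOCK_PTR(); both are assumptions for illustration, not the driver's
actual crc8_calc() or block mapping:

#include <stdint.h>
#include <stdio.h>

/* Generic CRC-8 (polynomial 0x07, MSB first), a stand-in for the
 * driver's crc8_calc(); the real polynomial or table may differ.
 */
static uint8_t crc8(const uint8_t *data, int len)
{
	uint8_t crc = 0;
	int i, j;

	for (i = 0; i < len; i++) {
		crc ^= data[i];
		for (j = 0; j < 8; j++)
			crc = (crc & 0x80) ? (crc << 1) ^ 0x07 : crc << 1;
	}
	return crc;
}

/* Stand-in for ATABLE_ENTRY_PER_SLOT; the real value comes from the
 * driver's header.
 */
#define ENTRIES_PER_SLOT 4

int main(void)
{
	const uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	/* Models GET_BLOCK_PTR(): map the hash to the first entry of
	 * its block, then probe only that block.
	 */
	int entry, start = crc8(mac, 6) * ENTRIES_PER_SLOT;

	for (entry = start; entry < start + ENTRIES_PER_SLOT; entry++)
		printf("would probe address-table entry %d\n", entry);

	return 0;
}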