From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Machon
Date: Mon, 4 May 2026 16:23:23 +0200
Subject: [PATCH net-next v3 10/13] net: lan966x: add PCIe FDMA MTU change support
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-ID: <20260504-lan966x-pci-fdma-v3-10-a56f5740d870@microchip.com>
References: <20260504-lan966x-pci-fdma-v3-0-a56f5740d870@microchip.com>
In-Reply-To: <20260504-lan966x-pci-fdma-v3-0-a56f5740d870@microchip.com>
To: Andrew Lunn, "David S.
 Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Horatiu Vultur,
 Steen Hegelund, "Alexei Starovoitov", Daniel Borkmann,
 "Jesper Dangaard Brouer", John Fastabend, Stanislav Fomichev,
 Herve Codina, Arnd Bergmann, Greg Kroah-Hartman, Mohsin Bashir
CC:
X-Mailer: b4 0.14.3

Add MTU change support for the PCIe FDMA path. When the MTU changes, the
contiguous ATU-mapped RX and TX buffers are reallocated with the new size.
On allocation failure, the existing buffers are reused after being reset.

Cap the PCIe DCB ring at 256 (FDMA_PCI_DCB_MAX) to keep the entire
contiguous allocation under MAX_PAGE_ORDER at jumbo MTU, which 512 DCBs
would overflow.

Tested-by: Herve Codina
Signed-off-by: Daniel Machon
---
 .../ethernet/microchip/lan966x/lan966x_fdma_pci.c | 157 ++++++++++++++++++++-
 1 file changed, 154 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma_pci.c b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma_pci.c
index 2c5488046077..491ddc337760 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_fdma_pci.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_fdma_pci.c
@@ -3,6 +3,11 @@
 #include "fdma_api.h"
 #include "lan966x_main.h"
 
+/* Ring must fit in one MAX_PAGE_ORDER DMA block; 512 DCBs overflows
+ * at jumbo MTU.
+ */
+#define FDMA_PCI_DCB_MAX 256
+
 static int lan966x_fdma_pci_dataptr_cb(struct fdma *fdma, int dcb, int db,
				       u64 *dataptr)
 {
@@ -321,7 +326,7 @@ static int lan966x_fdma_pci_init(struct lan966x *lan966x)
 	lan966x->rx.lan966x = lan966x;
 	lan966x->rx.max_mtu = lan966x_fdma_get_max_frame(lan966x);
 	rx_fdma->channel_id = FDMA_XTR_CHANNEL;
-	rx_fdma->n_dcbs = FDMA_DCB_MAX;
+	rx_fdma->n_dcbs = FDMA_PCI_DCB_MAX;
 	rx_fdma->n_dbs = FDMA_RX_DCB_MAX_DBS;
 	rx_fdma->priv = lan966x;
 	rx_fdma->db_size = FDMA_PCI_DB_SIZE(lan966x->rx.max_mtu);
@@ -331,7 +336,7 @@ static int lan966x_fdma_pci_init(struct lan966x *lan966x)
 
 	lan966x->tx.lan966x = lan966x;
 	tx_fdma->channel_id = FDMA_INJ_CHANNEL;
-	tx_fdma->n_dcbs = FDMA_DCB_MAX;
+	tx_fdma->n_dcbs = FDMA_PCI_DCB_MAX;
 	tx_fdma->n_dbs = FDMA_TX_DCB_MAX_DBS;
 	tx_fdma->priv = lan966x;
 	tx_fdma->db_size = FDMA_PCI_DB_SIZE(lan966x->rx.max_mtu);
@@ -354,9 +359,155 @@ static int lan966x_fdma_pci_init(struct lan966x *lan966x)
 	return 0;
 }
 
+/* Reset existing rx and tx buffers. */
+static void lan966x_fdma_pci_reset_mem(struct lan966x *lan966x)
+{
+	struct lan966x_rx *rx = &lan966x->rx;
+	struct lan966x_tx *tx = &lan966x->tx;
+
+	memset(rx->fdma.dcbs, 0, rx->fdma.size);
+	memset(tx->fdma.dcbs, 0, tx->fdma.size);
+
+	fdma_dcbs_init(&rx->fdma,
+		       FDMA_DCB_INFO_DATAL(rx->fdma.db_size),
+		       FDMA_DCB_STATUS_INTR);
+
+	fdma_dcbs_init(&tx->fdma,
+		       FDMA_DCB_INFO_DATAL(tx->fdma.db_size),
+		       FDMA_DCB_STATUS_DONE);
+
+	lan966x_fdma_llp_configure(lan966x,
+				   tx->fdma.atu_region->base_addr,
+				   tx->fdma.channel_id);
+	lan966x_fdma_llp_configure(lan966x,
+				   rx->fdma.atu_region->base_addr,
+				   rx->fdma.channel_id);
+}
+
+/* Drain in-flight xmit callers and stop all TX queues on every port.
+ */
+static void lan966x_fdma_pci_stop_netdev(struct lan966x *lan966x)
+{
+	for (int i = 0; i < lan966x->num_phys_ports; ++i) {
+		struct lan966x_port *port = lan966x->ports[i];
+
+		if (port)
+			netif_tx_disable(port->dev);
+	}
+}
+
+/* Wake all TX queues on every port (undoes lan966x_fdma_pci_stop_netdev). */
+static void lan966x_fdma_pci_wakeup_netdev(struct lan966x *lan966x)
+{
+	for (int i = 0; i < lan966x->num_phys_ports; ++i) {
+		struct lan966x_port *port = lan966x->ports[i];
+
+		if (port)
+			netif_tx_wake_all_queues(port->dev);
+	}
+}
+
+static int lan966x_fdma_pci_reload(struct lan966x *lan966x, int new_mtu)
+{
+	struct fdma tx_fdma_old = lan966x->tx.fdma;
+	struct fdma rx_fdma_old = lan966x->rx.fdma;
+	u32 old_mtu = lan966x->rx.max_mtu;
+	int err;
+
+	napi_synchronize(&lan966x->napi);
+	napi_disable(&lan966x->napi);
+	lan966x_fdma_pci_stop_netdev(lan966x);
+	lan966x_fdma_rx_disable(&lan966x->rx);
+	lan966x_fdma_tx_disable(&lan966x->tx);
+
+	lan966x->rx.max_mtu = new_mtu;
+
+	lan966x->tx.fdma.db_size = FDMA_PCI_DB_SIZE(lan966x->rx.max_mtu);
+	lan966x->tx.fdma.size = fdma_get_size_contiguous(&lan966x->tx.fdma);
+	lan966x->rx.fdma.db_size = FDMA_PCI_DB_SIZE(lan966x->rx.max_mtu);
+	lan966x->rx.fdma.size = fdma_get_size_contiguous(&lan966x->rx.fdma);
+
+	err = lan966x_fdma_pci_rx_alloc(&lan966x->rx);
+	if (err)
+		goto restore;
+
+	err = lan966x_fdma_pci_tx_alloc(&lan966x->tx);
+	if (err) {
+		fdma_free_coherent_and_unmap(lan966x->dev, &lan966x->rx.fdma);
+		goto restore;
+	}
+
+	/* Free and unmap old memory. */
+	fdma_free_coherent_and_unmap(lan966x->dev, &rx_fdma_old);
+	fdma_free_coherent_and_unmap(lan966x->dev, &tx_fdma_old);
+
+	/* Keep this order: rx_start, wakeup_netdev, napi_enable. */
+	lan966x_fdma_rx_start(&lan966x->rx);
+	lan966x_fdma_pci_wakeup_netdev(lan966x);
+	napi_enable(&lan966x->napi);
+
+	return err;
+restore:
+
+	/* No new buffers are allocated at this point. Use the old buffers,
+	 * but reset them before starting the FDMA again.
+	 */
+
+	memcpy(&lan966x->tx.fdma, &tx_fdma_old, sizeof(struct fdma));
+	memcpy(&lan966x->rx.fdma, &rx_fdma_old, sizeof(struct fdma));
+
+	lan966x->rx.max_mtu = old_mtu;
+
+	lan966x_fdma_pci_reset_mem(lan966x);
+
+	/* Keep this order: rx_start, wakeup_netdev, napi_enable. */
+	lan966x_fdma_rx_start(&lan966x->rx);
+	lan966x_fdma_pci_wakeup_netdev(lan966x);
+	napi_enable(&lan966x->napi);
+
+	return err;
+}
+
+static int __lan966x_fdma_pci_reload(struct lan966x *lan966x, int max_mtu)
+{
+	int err;
+	u32 val;
+
+	/* Disable the CPU port. */
+	lan_rmw(QSYS_SW_PORT_MODE_PORT_ENA_SET(0),
+		QSYS_SW_PORT_MODE_PORT_ENA,
+		lan966x, QSYS_SW_PORT_MODE(CPU_PORT));
+
+	/* Flush the CPU queues. */
+	readx_poll_timeout(lan966x_qsys_sw_status,
+			   lan966x,
+			   val,
+			   !(QSYS_SW_STATUS_EQ_AVAIL_GET(val)),
+			   READL_SLEEP_US, READL_TIMEOUT_US);
+
+	/* Add a sleep in case there are frames between the queues and the CPU
+	 * port.
+	 */
+	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
+
+	err = lan966x_fdma_pci_reload(lan966x, max_mtu);
+
+	/* Enable back the CPU port. */
+	lan_rmw(QSYS_SW_PORT_MODE_PORT_ENA_SET(1),
+		QSYS_SW_PORT_MODE_PORT_ENA,
+		lan966x, QSYS_SW_PORT_MODE(CPU_PORT));
+
+	return err;
+}
+
 static int lan966x_fdma_pci_resize(struct lan966x *lan966x)
 {
-	return -EOPNOTSUPP;
+	int max_mtu;
+
+	max_mtu = lan966x_fdma_get_max_frame(lan966x);
+	if (max_mtu == lan966x->rx.max_mtu)
+		return 0;
+
+	return __lan966x_fdma_pci_reload(lan966x, max_mtu);
 }
 
 static void lan966x_fdma_pci_deinit(struct lan966x *lan966x)
-- 
2.34.1