From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Haiyang Zhang,
	Long Li, Shradha Gupta, Jakub Kicinski
Subject: [PATCH 6.14 416/449] net: mana: Switch to page pool for jumbo frames
Date: Thu, 17 Apr 2025 19:51:44 +0200
Message-ID: <20250417175135.033389321@linuxfoundation.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250417175117.964400335@linuxfoundation.org>
References: <20250417175117.964400335@linuxfoundation.org>
User-Agent: quilt/0.68
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Haiyang Zhang

commit fa37a8849634db2dd3545116873da8cf4b1e67c6 upstream.

Frag allocators, such as netdev_alloc_frag(), were not designed to
work for fragsz > PAGE_SIZE.

So, switch to page pool for jumbo frames instead of using page frag
allocators. This driver is using page pool for smaller MTUs already.
Cc: stable@vger.kernel.org
Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
Signed-off-by: Haiyang Zhang
Reviewed-by: Long Li
Reviewed-by: Shradha Gupta
Link: https://patch.msgid.link/1742920357-27263-1-git-send-email-haiyangz@microsoft.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 46 +++++---------------------
 1 file changed, 9 insertions(+), 37 deletions(-)

--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -652,30 +652,16 @@ int mana_pre_alloc_rxbufs(struct mana_po
 	mpc->rxbpre_total = 0;
 
 	for (i = 0; i < num_rxb; i++) {
-		if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
-			va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
-			if (!va)
-				goto error;
-
-			page = virt_to_head_page(va);
-			/* Check if the frag falls back to single page */
-			if (compound_order(page) <
-			    get_order(mpc->rxbpre_alloc_size)) {
-				put_page(page);
-				goto error;
-			}
-		} else {
-			page = dev_alloc_page();
-			if (!page)
-				goto error;
+		page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
+		if (!page)
+			goto error;
 
-			va = page_to_virt(page);
-		}
+		va = page_to_virt(page);
 
 		da = dma_map_single(dev, va + mpc->rxbpre_headroom,
 				    mpc->rxbpre_datasize, DMA_FROM_DEVICE);
 		if (dma_mapping_error(dev, da)) {
-			put_page(virt_to_head_page(va));
+			put_page(page);
 			goto error;
 		}
 
@@ -1660,7 +1646,7 @@ drop:
 }
 
 static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
-			     dma_addr_t *da, bool *from_pool, bool is_napi)
+			     dma_addr_t *da, bool *from_pool)
 {
 	struct page *page;
 	void *va;
@@ -1671,21 +1657,6 @@ static void *mana_get_rxfrag(struct mana
 	if (rxq->xdp_save_va) {
 		va = rxq->xdp_save_va;
 		rxq->xdp_save_va = NULL;
-	} else if (rxq->alloc_size > PAGE_SIZE) {
-		if (is_napi)
-			va = napi_alloc_frag(rxq->alloc_size);
-		else
-			va = netdev_alloc_frag(rxq->alloc_size);
-
-		if (!va)
-			return NULL;
-
-		page = virt_to_head_page(va);
-		/* Check if the frag falls back to single page */
-		if (compound_order(page) < get_order(rxq->alloc_size)) {
-			put_page(page);
-			return NULL;
-		}
 	} else {
 		page = page_pool_dev_alloc_pages(rxq->page_pool);
 		if (!page)
@@ -1718,7 +1689,7 @@ static void mana_refill_rx_oob(struct de
 	dma_addr_t da;
 	void *va;
 
-	va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
+	va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
 	if (!va)
 		return;
 
@@ -2158,7 +2129,7 @@ static int mana_fill_rx_oob(struct mana_
 	if (mpc->rxbufs_pre)
 		va = mana_get_rxbuf_pre(rxq, &da);
 	else
-		va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
+		va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
 
 	if (!va)
 		return -ENOMEM;
 
@@ -2244,6 +2215,7 @@ static int mana_create_page_pool(struct
 	pprm.nid = gc->numa_node;
 	pprm.napi = &rxq->rx_cq.napi;
 	pprm.netdev = rxq->ndev;
+	pprm.order = get_order(rxq->alloc_size);
 
 	rxq->page_pool = page_pool_create(&pprm);
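
For context on the last hunk: setting pprm.order makes the pool hand out
compound pages of 2^order base pages, so a single allocation covers an RX
buffer larger than PAGE_SIZE, and the compound_order() fallback checks that
had to guard netdev_alloc_frag() become unnecessary. Below is a minimal
sketch of that pattern, not code from this patch; my_create_jumbo_pool()
and the pool_size value are made up for illustration, while the
struct page_pool_params fields and page_pool_create() call are the same
kernel API the driver uses.

#include <linux/gfp.h>
#include <net/page_pool/helpers.h>

/* Hypothetical helper, for illustration only: create a pool whose
 * pages are large enough to hold one alloc_size-byte RX buffer.
 */
static struct page_pool *my_create_jumbo_pool(struct net_device *ndev,
					      unsigned int alloc_size)
{
	struct page_pool_params pprm = {};

	pprm.pool_size = 256;			/* hypothetical ring depth */
	pprm.netdev = ndev;
	/* The key line mirrored from the patch: order > 0 means every
	 * page handed out by the pool is a compound page of 2^order
	 * base pages, covering alloc_size even when it exceeds PAGE_SIZE.
	 */
	pprm.order = get_order(alloc_size);

	return page_pool_create(&pprm);		/* ERR_PTR() on failure */
}

A buffer is then obtained with page_pool_dev_alloc_pages() and released
with page_pool_put_full_page(); unlike the old netdev_alloc_frag() path,
the pool either returns a page of the requested order or fails outright,
never silently falling back to a single page.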