* [PATCH net] net: mana: Switch to page pool for jumbo frames
@ 2025-03-22 1:04 Haiyang Zhang
2025-03-25 16:13 ` Jakub Kicinski
0 siblings, 1 reply; 3+ messages in thread
From: Haiyang Zhang @ 2025-03-22 1:04 UTC (permalink / raw)
To: linux-hyperv, netdev
Cc: haiyangz, decui, stephen, kys, paulros, olaf, vkuznets, davem,
wei.liu, edumazet, kuba, pabeni, leon, longli, ssengar,
linux-rdma, daniel, john.fastabend, bpf, ast, hawk, tglx,
shradhagupta, linux-kernel, stable
Since commit 8218f62c9c9b ("mm: page_frag: use initial zero offset for
page_frag_alloc_align()"), the netdev_alloc_frag() no longer works for
fragsz > PAGE_SIZE. And, this behavior is by design.

So, switch to page pool for jumbo frames instead of using page frag
allocators. This driver is using page pool for smaller MTUs already.

Cc: stable@vger.kernel.org
Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
---
drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------------
1 file changed, 9 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 9a8171f099b6..4d41f4cca3d8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
mpc->rxbpre_total = 0;
for (i = 0; i < num_rxb; i++) {
- if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
- va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
- if (!va)
- goto error;
-
- page = virt_to_head_page(va);
- /* Check if the frag falls back to single page */
- if (compound_order(page) <
- get_order(mpc->rxbpre_alloc_size)) {
- put_page(page);
- goto error;
- }
- } else {
- page = dev_alloc_page();
- if (!page)
- goto error;
+ page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
+ if (!page)
+ goto error;
- va = page_to_virt(page);
- }
+ va = page_to_virt(page);
da = dma_map_single(dev, va + mpc->rxbpre_headroom,
mpc->rxbpre_datasize, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, da)) {
- put_page(virt_to_head_page(va));
+ put_page(page);
goto error;
}
@@ -1672,7 +1658,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool,
}
static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
- dma_addr_t *da, bool *from_pool, bool is_napi)
+ dma_addr_t *da, bool *from_pool)
{
struct page *page;
void *va;
@@ -1683,21 +1669,6 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
if (rxq->xdp_save_va) {
va = rxq->xdp_save_va;
rxq->xdp_save_va = NULL;
- } else if (rxq->alloc_size > PAGE_SIZE) {
- if (is_napi)
- va = napi_alloc_frag(rxq->alloc_size);
- else
- va = netdev_alloc_frag(rxq->alloc_size);
-
- if (!va)
- return NULL;
-
- page = virt_to_head_page(va);
- /* Check if the frag falls back to single page */
- if (compound_order(page) < get_order(rxq->alloc_size)) {
- put_page(page);
- return NULL;
- }
} else {
page = page_pool_dev_alloc_pages(rxq->page_pool);
if (!page)
@@ -1730,7 +1701,7 @@ static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
dma_addr_t da;
void *va;
- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
+ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
if (!va)
return;
@@ -2172,7 +2143,7 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
if (mpc->rxbufs_pre)
va = mana_get_rxbuf_pre(rxq, &da);
else
- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
+ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
if (!va)
return -ENOMEM;
@@ -2258,6 +2229,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
pprm.nid = gc->numa_node;
pprm.napi = &rxq->rx_cq.napi;
pprm.netdev = rxq->ndev;
+ pprm.order = get_order(rxq->alloc_size);
rxq->page_pool = page_pool_create(&pprm);
--
2.34.1
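
For reference, a minimal sketch of the scheme the patch moves to: a page
pool whose entries are compound pages large enough for one jumbo-frame RX
buffer, so the fragsz > PAGE_SIZE path no longer needs the frag allocator.
The helper names, the pool_size value, and the parameter list below are
illustrative, not the driver's exact code.

#include <linux/mm.h>
#include <linux/netdevice.h>
#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

/* Create a pool whose pages are big enough for one jumbo RX buffer.
 * page_pool_create() returns an ERR_PTR on failure.  Buffers go back to
 * the pool via page_pool_put_full_page() once the skb is done with them.
 */
static struct page_pool *jumbo_pool_create(struct napi_struct *napi,
                                           struct net_device *ndev,
                                           u32 alloc_size, int nid)
{
        struct page_pool_params pprm = {
                .order          = get_order(alloc_size), /* > 0 for jumbo MTUs */
                .pool_size      = 256,                   /* illustrative depth */
                .nid            = nid,
                .napi           = napi,
                .netdev         = ndev,
        };

        return page_pool_create(&pprm);
}

/* Allocation then mirrors the existing small-MTU path: one pool page
 * (of the configured order) per RX buffer.
 */
static void *jumbo_rxbuf_alloc(struct page_pool *pool, struct page **pagep)
{
        struct page *page = page_pool_dev_alloc_pages(pool);

        if (!page)
                return NULL;

        *pagep = page;
        return page_to_virt(page);
}

With .order set, the pool hands back pages of the requested order up
front, so the compound_order() fallback check carried by the removed
frag-allocator path is no longer needed.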
* Re: [PATCH net] net: mana: Switch to page pool for jumbo frames
2025-03-22 1:04 [PATCH net] net: mana: Switch to page pool for jumbo frames Haiyang Zhang
@ 2025-03-25 16:13 ` Jakub Kicinski
2025-03-25 16:19 ` [EXTERNAL] " Haiyang Zhang
0 siblings, 1 reply; 3+ messages in thread
From: Jakub Kicinski @ 2025-03-25 16:13 UTC (permalink / raw)
To: Haiyang Zhang
Cc: linux-hyperv, netdev, decui, stephen, kys, paulros, olaf,
vkuznets, davem, wei.liu, edumazet, pabeni, leon, longli, ssengar,
linux-rdma, daniel, john.fastabend, bpf, ast, hawk, tglx,
shradhagupta, linux-kernel, stable
On Fri, 21 Mar 2025 18:04:35 -0700 Haiyang Zhang wrote:
> Since commit 8218f62c9c9b ("mm: page_frag: use initial zero offset for
> page_frag_alloc_align()"), the netdev_alloc_frag() no longer works for
> fragsz > PAGE_SIZE. And, this behavior is by design.

This is inaccurate, AFAIU. The user of frag allocator with fragsz >
PAGE_SIZE has _always_ been incorrect. It's just that it fails in
a more obvious way now? Please correct the commit message.
--
pw-bot: cr
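
To make the failure mode concrete: the driver used to carry exactly this
guard (removed by the patch above), because a frag request larger than
PAGE_SIZE may end up backed by a single page when the higher-order
allocation falls back, and the caller has to verify the backing compound
page itself. A minimal sketch of that defensive pattern follows; the
helper name is hypothetical.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/* Hypothetical helper: request a frag larger than PAGE_SIZE and check
 * that the backing page really is big enough, since the frag allocator
 * may fall back to a single page.
 */
static void *alloc_big_frag_checked(unsigned int fragsz)
{
        struct page *page;
        void *va;

        va = netdev_alloc_frag(fragsz);
        if (!va)
                return NULL;

        page = virt_to_head_page(va);
        if (compound_order(page) < get_order(fragsz)) {
                /* Frag fell back to a single page: too small, give it back. */
                put_page(page);
                return NULL;
        }

        return va;
}

Moving the jumbo path onto the page pool, as the patch does, removes the
need for this check entirely.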
* RE: [EXTERNAL] Re: [PATCH net] net: mana: Switch to page pool for jumbo frames
2025-03-25 16:13 ` Jakub Kicinski
@ 2025-03-25 16:19 ` Haiyang Zhang
0 siblings, 0 replies; 3+ messages in thread
From: Haiyang Zhang @ 2025-03-25 16:19 UTC (permalink / raw)
To: Jakub Kicinski
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, Dexuan Cui,
stephen@networkplumber.org, KY Srinivasan, Paul Rosswurm,
olaf@aepfle.de, vkuznets, davem@davemloft.net, wei.liu@kernel.org,
edumazet@google.com, pabeni@redhat.com, leon@kernel.org, Long Li,
ssengar@linux.microsoft.com, linux-rdma@vger.kernel.org,
daniel@iogearbox.net, john.fastabend@gmail.com,
bpf@vger.kernel.org, ast@kernel.org, hawk@kernel.org,
tglx@linutronix.de, shradhagupta@linux.microsoft.com,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
> -----Original Message-----
> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Tuesday, March 25, 2025 12:14 PM
> To: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org; Dexuan Cui
> <decui@microsoft.com>; stephen@networkplumber.org; KY Srinivasan
> <kys@microsoft.com>; Paul Rosswurm <paulros@microsoft.com>;
> olaf@aepfle.de; vkuznets <vkuznets@redhat.com>; davem@davemloft.net;
> wei.liu@kernel.org; edumazet@google.com; pabeni@redhat.com;
> leon@kernel.org; Long Li <longli@microsoft.com>;
> ssengar@linux.microsoft.com; linux-rdma@vger.kernel.org;
> daniel@iogearbox.net; john.fastabend@gmail.com; bpf@vger.kernel.org;
> ast@kernel.org; hawk@kernel.org; tglx@linutronix.de;
> shradhagupta@linux.microsoft.com; linux-kernel@vger.kernel.org;
> stable@vger.kernel.org
> Subject: [EXTERNAL] Re: [PATCH net] net: mana: Switch to page pool for
> jumbo frames
>
> On Fri, 21 Mar 2025 18:04:35 -0700 Haiyang Zhang wrote:
> > Since commit 8218f62c9c9b ("mm: page_frag: use initial zero offset for
> > page_frag_alloc_align()"), the netdev_alloc_frag() no longer works for
> > fragsz > PAGE_SIZE. And, this behavior is by design.
>
> This is inaccurate, AFAIU. The user of frag allocator with fragsz >
> PAGE_SIZE has _always_ been incorrect. It's just that it fails in
> a more obvious way now? Please correct the commit message.

Yes. I will correct the msg.