* [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
@ 2025-03-25 16:32 Haiyang Zhang
2025-03-25 17:06 ` Long Li
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Haiyang Zhang @ 2025-03-25 16:32 UTC (permalink / raw)
To: linux-hyperv, netdev
Cc: haiyangz, decui, stephen, kys, paulros, olaf, vkuznets, davem,
wei.liu, edumazet, kuba, pabeni, leon, longli, ssengar,
linux-rdma, daniel, john.fastabend, bpf, ast, hawk, tglx,
shradhagupta, jesse.brandeburg, andrew+netdev, linux-kernel,
stable
Frag allocators, such as netdev_alloc_frag(), were not designed to
work for fragsz > PAGE_SIZE.
So, switch to page pool for jumbo frames instead of using page frag
allocators. This driver is using page pool for smaller MTUs already.
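As an illustrative sketch only (not part of this patch; rx_queue_size and
alloc_size are placeholder names), a page pool that hands out high-order
pages for jumbo-sized RX buffers can be set up roughly like this:

	struct page_pool_params pprm = {};
	struct page_pool *pool;
	struct page *page;
	void *va;

	pprm.pool_size = rx_queue_size;		/* buffers to keep cached */
	pprm.order = get_order(alloc_size);	/* > 0 for jumbo frames */
	pprm.nid = NUMA_NO_NODE;

	pool = page_pool_create(&pprm);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	page = page_pool_dev_alloc_pages(pool);	/* one high-order page */
	if (page)
		va = page_to_virt(page);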
Cc: stable@vger.kernel.org
Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
---
v2: updated the commit msg as suggested by Jakub Kicinski.
---
drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------------
1 file changed, 9 insertions(+), 37 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 9a8171f099b6..4d41f4cca3d8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
mpc->rxbpre_total = 0;
for (i = 0; i < num_rxb; i++) {
- if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
- va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
- if (!va)
- goto error;
-
- page = virt_to_head_page(va);
- /* Check if the frag falls back to single page */
- if (compound_order(page) <
- get_order(mpc->rxbpre_alloc_size)) {
- put_page(page);
- goto error;
- }
- } else {
- page = dev_alloc_page();
- if (!page)
- goto error;
+ page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
+ if (!page)
+ goto error;
- va = page_to_virt(page);
- }
+ va = page_to_virt(page);
da = dma_map_single(dev, va + mpc->rxbpre_headroom,
mpc->rxbpre_datasize, DMA_FROM_DEVICE);
if (dma_mapping_error(dev, da)) {
- put_page(virt_to_head_page(va));
+ put_page(page);
goto error;
}
@@ -1672,7 +1658,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool,
}
static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
- dma_addr_t *da, bool *from_pool, bool is_napi)
+ dma_addr_t *da, bool *from_pool)
{
struct page *page;
void *va;
@@ -1683,21 +1669,6 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
if (rxq->xdp_save_va) {
va = rxq->xdp_save_va;
rxq->xdp_save_va = NULL;
- } else if (rxq->alloc_size > PAGE_SIZE) {
- if (is_napi)
- va = napi_alloc_frag(rxq->alloc_size);
- else
- va = netdev_alloc_frag(rxq->alloc_size);
-
- if (!va)
- return NULL;
-
- page = virt_to_head_page(va);
- /* Check if the frag falls back to single page */
- if (compound_order(page) < get_order(rxq->alloc_size)) {
- put_page(page);
- return NULL;
- }
} else {
page = page_pool_dev_alloc_pages(rxq->page_pool);
if (!page)
@@ -1730,7 +1701,7 @@ static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
dma_addr_t da;
void *va;
- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
+ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
if (!va)
return;
@@ -2172,7 +2143,7 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
if (mpc->rxbufs_pre)
va = mana_get_rxbuf_pre(rxq, &da);
else
- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
+ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
if (!va)
return -ENOMEM;
@@ -2258,6 +2229,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
pprm.nid = gc->numa_node;
pprm.napi = &rxq->rx_cq.napi;
pprm.netdev = rxq->ndev;
+ pprm.order = get_order(rxq->alloc_size);
rxq->page_pool = page_pool_create(&pprm);
--
2.34.1
* RE: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
2025-03-25 16:32 [PATCH net,v2] net: mana: Switch to page pool for jumbo frames Haiyang Zhang
@ 2025-03-25 17:06 ` Long Li
2025-03-25 18:11 ` Haiyang Zhang
2025-03-27 5:01 ` Shradha Gupta
2025-03-28 13:50 ` patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Long Li @ 2025-03-25 17:06 UTC (permalink / raw)
To: Haiyang Zhang, linux-hyperv@vger.kernel.org,
netdev@vger.kernel.org
Cc: Dexuan Cui, stephen@networkplumber.org, KY Srinivasan,
Paul Rosswurm, olaf@aepfle.de, vkuznets, davem@davemloft.net,
wei.liu@kernel.org, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, leon@kernel.org, ssengar@linux.microsoft.com,
linux-rdma@vger.kernel.org, daniel@iogearbox.net,
john.fastabend@gmail.com, bpf@vger.kernel.org, ast@kernel.org,
hawk@kernel.org, tglx@linutronix.de,
shradhagupta@linux.microsoft.com, jesse.brandeburg@intel.com,
andrew+netdev@lunn.ch, linux-kernel@vger.kernel.org,
stable@vger.kernel.org
> -----Original Message-----
> From: LKML haiyangz <lkmlhyz@microsoft.com> On Behalf Of Haiyang Zhang
> Sent: Tuesday, March 25, 2025 9:33 AM
> To: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org
> Cc: Haiyang Zhang <haiyangz@microsoft.com>; Dexuan Cui
> <decui@microsoft.com>; stephen@networkplumber.org; KY Srinivasan
> <kys@microsoft.com>; Paul Rosswurm <paulros@microsoft.com>;
> olaf@aepfle.de; vkuznets <vkuznets@redhat.com>; davem@davemloft.net;
> wei.liu@kernel.org; edumazet@google.com; kuba@kernel.org;
> pabeni@redhat.com; leon@kernel.org; Long Li <longli@microsoft.com>;
> ssengar@linux.microsoft.com; linux-rdma@vger.kernel.org;
> daniel@iogearbox.net; john.fastabend@gmail.com; bpf@vger.kernel.org;
> ast@kernel.org; hawk@kernel.org; tglx@linutronix.de;
> shradhagupta@linux.microsoft.com; jesse.brandeburg@intel.com;
> andrew+netdev@lunn.ch; linux-kernel@vger.kernel.org; stable@vger.kernel.org
> Subject: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
>
> Frag allocators, such as netdev_alloc_frag(), were not designed to work for
> fragsz > PAGE_SIZE.
>
> So, switch to page pool for jumbo frames instead of using page frag allocators.
> This driver is using page pool for smaller MTUs already.
>
> Cc: stable@vger.kernel.org
> Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> v2: updated the commit msg as suggested by Jakub Kicinski.
>
> ---
> drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------------
> 1 file changed, 9 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c
> b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index 9a8171f099b6..4d41f4cca3d8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context
> *mpc, int new_mtu, int num_qu
> mpc->rxbpre_total = 0;
>
> for (i = 0; i < num_rxb; i++) {
> - if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
> - va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
> - if (!va)
> - goto error;
> -
> - page = virt_to_head_page(va);
> - /* Check if the frag falls back to single page */
> - if (compound_order(page) <
> - get_order(mpc->rxbpre_alloc_size)) {
> - put_page(page);
> - goto error;
> - }
> - } else {
> - page = dev_alloc_page();
> - if (!page)
> - goto error;
> + page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
> + if (!page)
> + goto error;
>
> - va = page_to_virt(page);
> - }
> + va = page_to_virt(page);
>
> da = dma_map_single(dev, va + mpc->rxbpre_headroom,
> mpc->rxbpre_datasize, DMA_FROM_DEVICE);
> if (dma_mapping_error(dev, da)) {
> - put_page(virt_to_head_page(va));
> + put_page(page);
Should we use __free_pages()?
* RE: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
2025-03-25 17:06 ` Long Li
@ 2025-03-25 18:11 ` Haiyang Zhang
2025-03-25 18:32 ` Long Li
0 siblings, 1 reply; 6+ messages in thread
From: Haiyang Zhang @ 2025-03-25 18:11 UTC (permalink / raw)
To: Long Li, linux-hyperv@vger.kernel.org, netdev@vger.kernel.org
Cc: Dexuan Cui, stephen@networkplumber.org, KY Srinivasan,
Paul Rosswurm, olaf@aepfle.de, vkuznets, davem@davemloft.net,
wei.liu@kernel.org, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, leon@kernel.org, ssengar@linux.microsoft.com,
linux-rdma@vger.kernel.org, daniel@iogearbox.net,
john.fastabend@gmail.com, bpf@vger.kernel.org, ast@kernel.org,
hawk@kernel.org, tglx@linutronix.de,
shradhagupta@linux.microsoft.com, jesse.brandeburg@intel.com,
andrew+netdev@lunn.ch, linux-kernel@vger.kernel.org,
stable@vger.kernel.org
> -----Original Message-----
> From: Long Li <longli@microsoft.com>
> Sent: Tuesday, March 25, 2025 1:06 PM
> To: Haiyang Zhang <haiyangz@microsoft.com>; linux-hyperv@vger.kernel.org;
> netdev@vger.kernel.org
> Cc: Dexuan Cui <decui@microsoft.com>; stephen@networkplumber.org; KY
> Srinivasan <kys@microsoft.com>; Paul Rosswurm <paulros@microsoft.com>;
> olaf@aepfle.de; vkuznets <vkuznets@redhat.com>; davem@davemloft.net;
> wei.liu@kernel.org; edumazet@google.com; kuba@kernel.org;
> pabeni@redhat.com; leon@kernel.org; ssengar@linux.microsoft.com; linux-
> rdma@vger.kernel.org; daniel@iogearbox.net; john.fastabend@gmail.com;
> bpf@vger.kernel.org; ast@kernel.org; hawk@kernel.org; tglx@linutronix.de;
> shradhagupta@linux.microsoft.com; jesse.brandeburg@intel.com;
> andrew+netdev@lunn.ch; linux-kernel@vger.kernel.org;
> stable@vger.kernel.org
> Subject: RE: [PATCH net,v2] net: mana: Switch to page pool for jumbo
> frames
>
>
>
> > -----Original Message-----
> > From: LKML haiyangz <lkmlhyz@microsoft.com> On Behalf Of Haiyang Zhang
> > Sent: Tuesday, March 25, 2025 9:33 AM
> > To: linux-hyperv@vger.kernel.org; netdev@vger.kernel.org
> > Cc: Haiyang Zhang <haiyangz@microsoft.com>; Dexuan Cui
> > <decui@microsoft.com>; stephen@networkplumber.org; KY Srinivasan
> > <kys@microsoft.com>; Paul Rosswurm <paulros@microsoft.com>;
> > olaf@aepfle.de; vkuznets <vkuznets@redhat.com>; davem@davemloft.net;
> > wei.liu@kernel.org; edumazet@google.com; kuba@kernel.org;
> > pabeni@redhat.com; leon@kernel.org; Long Li <longli@microsoft.com>;
> > ssengar@linux.microsoft.com; linux-rdma@vger.kernel.org;
> > daniel@iogearbox.net; john.fastabend@gmail.com; bpf@vger.kernel.org;
> > ast@kernel.org; hawk@kernel.org; tglx@linutronix.de;
> > shradhagupta@linux.microsoft.com; jesse.brandeburg@intel.com;
> > andrew+netdev@lunn.ch; linux-kernel@vger.kernel.org;
> stable@vger.kernel.org
> > Subject: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
> >
> > Frag allocators, such as netdev_alloc_frag(), were not designed to work
> for
> > fragsz > PAGE_SIZE.
> >
> > So, switch to page pool for jumbo frames instead of using page frag
> allocators.
> > This driver is using page pool for smaller MTUs already.
> >
> > Cc: stable@vger.kernel.org
> > Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
> > Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> > v2: updated the commit msg as suggested by Jakub Kicinski.
> >
> > ---
> > drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------------
> > 1 file changed, 9 insertions(+), 37 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > index 9a8171f099b6..4d41f4cca3d8 100644
> > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > @@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context
> > *mpc, int new_mtu, int num_qu
> > mpc->rxbpre_total = 0;
> >
> > for (i = 0; i < num_rxb; i++) {
> > - if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
> > - va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
> > - if (!va)
> > - goto error;
> > -
> > - page = virt_to_head_page(va);
> > - /* Check if the frag falls back to single page */
> > - if (compound_order(page) <
> > - get_order(mpc->rxbpre_alloc_size)) {
> > - put_page(page);
> > - goto error;
> > - }
> > - } else {
> > - page = dev_alloc_page();
> > - if (!page)
> > - goto error;
> > + page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
> > + if (!page)
> > + goto error;
> >
> > - va = page_to_virt(page);
> > - }
> > + va = page_to_virt(page);
> >
> > da = dma_map_single(dev, va + mpc->rxbpre_headroom,
> > mpc->rxbpre_datasize, DMA_FROM_DEVICE);
> > if (dma_mapping_error(dev, da)) {
> > - put_page(virt_to_head_page(va));
> > + put_page(page);
>
> Should we use __free_pages()?
Quote from doc: https://www.kernel.org/doc/html/next/core-api/mm-api.html
__free_pages():
"This function can free multi-page allocations that are not compound pages."
"If you want to use the page's reference count to decide when to free the
allocation, you should allocate a compound page, and use put_page() instead
of __free_pages()."
And, since dev_alloc_pages() returns a compound page for high-order
allocations, we use put_page(), which works for both compound and single
pages.
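To illustrate the behavior being relied on, a rough sketch (not from the
patch; alloc_size is a placeholder name):

	/* dev_alloc_pages() passes __GFP_COMP, so an order > 0 allocation
	 * comes back as a compound page; one put_page() on the head page
	 * then releases the whole allocation.
	 */
	struct page *page = dev_alloc_pages(get_order(alloc_size));

	if (page) {
		WARN_ON(get_order(alloc_size) > 0 && !PageCompound(page));
		put_page(page);	/* frees the entire allocation */
	}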
Thanks,
- Haiyang
* RE: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
2025-03-25 18:11 ` Haiyang Zhang
@ 2025-03-25 18:32 ` Long Li
0 siblings, 0 replies; 6+ messages in thread
From: Long Li @ 2025-03-25 18:32 UTC (permalink / raw)
To: Haiyang Zhang, linux-hyperv@vger.kernel.org,
netdev@vger.kernel.org
Cc: Dexuan Cui, stephen@networkplumber.org, KY Srinivasan,
Paul Rosswurm, olaf@aepfle.de, vkuznets, davem@davemloft.net,
wei.liu@kernel.org, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, leon@kernel.org, ssengar@linux.microsoft.com,
linux-rdma@vger.kernel.org, daniel@iogearbox.net,
john.fastabend@gmail.com, bpf@vger.kernel.org, ast@kernel.org,
hawk@kernel.org, tglx@linutronix.de,
shradhagupta@linux.microsoft.com, jesse.brandeburg@intel.com,
andrew+netdev@lunn.ch, linux-kernel@vger.kernel.org,
stable@vger.kernel.org
> > > Subject: [PATCH net,v2] net: mana: Switch to page pool for jumbo
> > > frames
> > >
> > > Frag allocators, such as netdev_alloc_frag(), were not designed to
> > > work
> > for
> > > fragsz > PAGE_SIZE.
> > >
> > > So, switch to page pool for jumbo frames instead of using page frag
> > allocators.
> > > This driver is using page pool for smaller MTUs already.
> > >
> > > Cc: stable@vger.kernel.org
> > > Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
> > > Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Long Li <longli@microsoft.com>
> > > ---
> > > v2: updated the commit msg as suggested by Jakub Kicinski.
> > >
> > > ---
> > > drivers/net/ethernet/microsoft/mana/mana_en.c | 46
> > > ++++---------------
> > > 1 file changed, 9 insertions(+), 37 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > index 9a8171f099b6..4d41f4cca3d8 100644
> > > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > > @@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct
> > > mana_port_context *mpc, int new_mtu, int num_qu
> > > mpc->rxbpre_total = 0;
> > >
> > > for (i = 0; i < num_rxb; i++) {
> > > - if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
> > > - va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
> > > - if (!va)
> > > - goto error;
> > > -
> > > - page = virt_to_head_page(va);
> > > - /* Check if the frag falls back to single page */
> > > - if (compound_order(page) <
> > > - get_order(mpc->rxbpre_alloc_size)) {
> > > - put_page(page);
> > > - goto error;
> > > - }
> > > - } else {
> > > - page = dev_alloc_page();
> > > - if (!page)
> > > - goto error;
> > > + page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
> > > + if (!page)
> > > + goto error;
> > >
> > > - va = page_to_virt(page);
> > > - }
> > > + va = page_to_virt(page);
> > >
> > > da = dma_map_single(dev, va + mpc->rxbpre_headroom,
> > > mpc->rxbpre_datasize, DMA_FROM_DEVICE);
> > > if (dma_mapping_error(dev, da)) {
> > > - put_page(virt_to_head_page(va));
> > > + put_page(page);
> >
> > Should we use __free_pages()?
>
> Quote from doc:
> https://www.kernel.org/doc/html/next/core-api/mm-api.html
> __free_pages():
> "This function can free multi-page allocations that are not compound pages."
> "If you want to use the page's reference count to decide when to free the
> allocation, you should allocate a compound page, and use put_page() instead of
> __free_pages()."
>
> And, since dev_alloc_pages() returns a compound page for high-order
> allocations, we use put_page(), which works for both compound and single
> pages.
>
> Thanks,
> - Haiyang
* Re: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
2025-03-25 16:32 [PATCH net,v2] net: mana: Switch to page pool for jumbo frames Haiyang Zhang
2025-03-25 17:06 ` Long Li
@ 2025-03-27 5:01 ` Shradha Gupta
2025-03-28 13:50 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: Shradha Gupta @ 2025-03-27 5:01 UTC (permalink / raw)
To: Haiyang Zhang
Cc: linux-hyperv, netdev, decui, stephen, kys, paulros, olaf,
vkuznets, davem, wei.liu, edumazet, kuba, pabeni, leon, longli,
ssengar, linux-rdma, daniel, john.fastabend, bpf, ast, hawk, tglx,
jesse.brandeburg, andrew+netdev, linux-kernel, stable
On Tue, Mar 25, 2025 at 09:32:37AM -0700, Haiyang Zhang wrote:
> Frag allocators, such as netdev_alloc_frag(), were not designed to
> work for fragsz > PAGE_SIZE.
>
> So, switch to page pool for jumbo frames instead of using page frag
> allocators. This driver is using page pool for smaller MTUs already.
>
> Cc: stable@vger.kernel.org
> Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> v2: updated the commit msg as suggested by Jakub Kicinski.
>
> ---
> drivers/net/ethernet/microsoft/mana/mana_en.c | 46 ++++---------------
> 1 file changed, 9 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> index 9a8171f099b6..4d41f4cca3d8 100644
> --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> @@ -661,30 +661,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
> mpc->rxbpre_total = 0;
>
> for (i = 0; i < num_rxb; i++) {
> - if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
> - va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
> - if (!va)
> - goto error;
> -
> - page = virt_to_head_page(va);
> - /* Check if the frag falls back to single page */
> - if (compound_order(page) <
> - get_order(mpc->rxbpre_alloc_size)) {
> - put_page(page);
> - goto error;
> - }
> - } else {
> - page = dev_alloc_page();
> - if (!page)
> - goto error;
> + page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
> + if (!page)
> + goto error;
>
> - va = page_to_virt(page);
> - }
> + va = page_to_virt(page);
>
> da = dma_map_single(dev, va + mpc->rxbpre_headroom,
> mpc->rxbpre_datasize, DMA_FROM_DEVICE);
> if (dma_mapping_error(dev, da)) {
> - put_page(virt_to_head_page(va));
> + put_page(page);
> goto error;
> }
>
> @@ -1672,7 +1658,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool,
> }
>
> static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
> - dma_addr_t *da, bool *from_pool, bool is_napi)
> + dma_addr_t *da, bool *from_pool)
> {
> struct page *page;
> void *va;
> @@ -1683,21 +1669,6 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
> if (rxq->xdp_save_va) {
> va = rxq->xdp_save_va;
> rxq->xdp_save_va = NULL;
> - } else if (rxq->alloc_size > PAGE_SIZE) {
> - if (is_napi)
> - va = napi_alloc_frag(rxq->alloc_size);
> - else
> - va = netdev_alloc_frag(rxq->alloc_size);
> -
> - if (!va)
> - return NULL;
> -
> - page = virt_to_head_page(va);
> - /* Check if the frag falls back to single page */
> - if (compound_order(page) < get_order(rxq->alloc_size)) {
> - put_page(page);
> - return NULL;
> - }
> } else {
> page = page_pool_dev_alloc_pages(rxq->page_pool);
> if (!page)
> @@ -1730,7 +1701,7 @@ static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
> dma_addr_t da;
> void *va;
>
> - va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
> + va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
> if (!va)
> return;
>
> @@ -2172,7 +2143,7 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
> if (mpc->rxbufs_pre)
> va = mana_get_rxbuf_pre(rxq, &da);
> else
> - va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
> + va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
>
> if (!va)
> return -ENOMEM;
> @@ -2258,6 +2229,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
> pprm.nid = gc->numa_node;
> pprm.napi = &rxq->rx_cq.napi;
> pprm.netdev = rxq->ndev;
> + pprm.order = get_order(rxq->alloc_size);
>
> rxq->page_pool = page_pool_create(&pprm);
>
> --
Reviewed-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> 2.34.1
* Re: [PATCH net,v2] net: mana: Switch to page pool for jumbo frames
2025-03-25 16:32 [PATCH net,v2] net: mana: Switch to page pool for jumbo frames Haiyang Zhang
2025-03-25 17:06 ` Long Li
2025-03-27 5:01 ` Shradha Gupta
@ 2025-03-28 13:50 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-03-28 13:50 UTC (permalink / raw)
To: Haiyang Zhang
Cc: linux-hyperv, netdev, decui, stephen, kys, paulros, olaf,
vkuznets, davem, wei.liu, edumazet, kuba, pabeni, leon, longli,
ssengar, linux-rdma, daniel, john.fastabend, bpf, ast, hawk, tglx,
shradhagupta, jesse.brandeburg, andrew+netdev, linux-kernel,
stable
Hello:
This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Tue, 25 Mar 2025 09:32:37 -0700 you wrote:
> Frag allocators, such as netdev_alloc_frag(), were not designed to
> work for fragsz > PAGE_SIZE.
>
> So, switch to page pool for jumbo frames instead of using page frag
> allocators. This driver is using page pool for smaller MTUs already.
>
> Cc: stable@vger.kernel.org
> Fixes: 80f6215b450e ("net: mana: Add support for jumbo frame")
> Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
>
> [...]
Here is the summary with links:
- [net,v2] net: mana: Switch to page pool for jumbo frames
https://git.kernel.org/netdev/net/c/fa37a8849634
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html