netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/2] net: page_pool: Expose size limit
@ 2025-09-22  9:18 Tariq Toukan
  2025-09-22  9:18 ` [PATCH net-next 1/2] net: page_pool: Expose internal limit Tariq Toukan
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Tariq Toukan @ 2025-09-22  9:18 UTC (permalink / raw)
  To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	David S. Miller
  Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

Hi,

This small series by Dragos has two patches.

Patch #1 exposes the page_pool internal size limit so that drivers can
check against it before creating a page_pool.

Patch #2 adds usage of the exposed limit in mlx5e driver.

Regards,
Tariq

Dragos Tatulea (2):
  net: page_pool: Expose internal limit
  net/mlx5e: Clamp page_pool size to max

 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
 include/net/page_pool/types.h                     | 2 ++
 net/core/page_pool.c                              | 2 +-
 3 files changed, 5 insertions(+), 1 deletion(-)


base-commit: 312e6f7676e63bbb9b81e5c68e580a9f776cc6f0
-- 
2.31.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH net-next 1/2] net: page_pool: Expose internal limit
  2025-09-22  9:18 [PATCH net-next 0/2] net: page_pool: Expose size limit Tariq Toukan
@ 2025-09-22  9:18 ` Tariq Toukan
  2025-10-24  9:11   ` Ilias Apalodimas
  2025-09-22  9:18 ` [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max Tariq Toukan
  2025-09-22 10:04 ` [PATCH net-next 0/2] net: page_pool: Expose size limit Dawid Osuchowski
  2 siblings, 1 reply; 12+ messages in thread
From: Tariq Toukan @ 2025-09-22  9:18 UTC (permalink / raw)
  To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	David S. Miller
  Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

From: Dragos Tatulea <dtatulea@nvidia.com>

page_pool_init() has a sanity check that rejects a pool_size above 32K. But
page_pool users have no access to this limit, so there is no way to trim the
pool_size in advance. The -E2BIG error is of little help for retrying, as the
driver would have to guess the next size to try.

This patch exposes the limit in the page_pool header.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 include/net/page_pool/types.h | 2 ++
 net/core/page_pool.c          | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 1509a536cb85..22aee9a65a26 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -163,6 +163,8 @@ struct pp_memory_provider_params {
 	const struct memory_provider_ops *mp_ops;
 };
 
+#define PAGE_POOL_SIZE_LIMIT 32768
+
 struct page_pool {
 	struct page_pool_params_fast p;
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 36a98f2bcac3..1f0fdfb02f08 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -214,7 +214,7 @@ static int page_pool_init(struct page_pool *pool,
 		ring_qsize = pool->p.pool_size;
 
 	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
+	if (ring_qsize > PAGE_POOL_SIZE_LIMIT)
 		return -E2BIG;
 
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
-- 
2.31.1



* [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-22  9:18 [PATCH net-next 0/2] net: page_pool: Expose size limit Tariq Toukan
  2025-09-22  9:18 ` [PATCH net-next 1/2] net: page_pool: Expose internal limit Tariq Toukan
@ 2025-09-22  9:18 ` Tariq Toukan
  2025-09-23 13:10   ` Simon Horman
  2025-09-23 14:23   ` Jakub Kicinski
  2025-09-22 10:04 ` [PATCH net-next 0/2] net: page_pool: Expose size limit Dawid Osuchowski
  2 siblings, 2 replies; 12+ messages in thread
From: Tariq Toukan @ 2025-09-22  9:18 UTC (permalink / raw)
  To: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	David S. Miller
  Cc: Saeed Mahameed, Tariq Toukan, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

From: Dragos Tatulea <dtatulea@nvidia.com>

When the user configures a large ring size (8K) and a large MTU (9000)
in HW-GRO mode, the queue will fail to allocate due to the size of the
page_pool going above the limit.

This change clamps the pool_size to the limit.

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 5e007bb3bad1..e56052895776 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -989,6 +989,8 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
 		/* Create a page_pool and register it with rxq */
 		struct page_pool_params pp_params = { 0 };
 
+		pool_size = min_t(u32, pool_size, PAGE_POOL_SIZE_LIMIT);
+
 		pp_params.order     = 0;
 		pp_params.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pp_params.pool_size = pool_size;
-- 
2.31.1



* Re: [PATCH net-next 0/2] net: page_pool: Expose size limit
  2025-09-22  9:18 [PATCH net-next 0/2] net: page_pool: Expose size limit Tariq Toukan
  2025-09-22  9:18 ` [PATCH net-next 1/2] net: page_pool: Expose internal limit Tariq Toukan
  2025-09-22  9:18 ` [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max Tariq Toukan
@ 2025-09-22 10:04 ` Dawid Osuchowski
  2 siblings, 0 replies; 12+ messages in thread
From: Dawid Osuchowski @ 2025-09-22 10:04 UTC (permalink / raw)
  To: Tariq Toukan, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Andrew Lunn, David S. Miller
  Cc: Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

On 2025-09-22 11:18 AM, Tariq Toukan wrote:
> Hi,
> 
> This small series by Dragos has two patches.
> 
> Patch #1 exposes the page_pool internal size limit so that drivers can
> check against it before creating a page_pool.
> 
> Patch #2 adds usage of the exposed limit in mlx5e driver.
> 
> Regards,
> Tariq
> 
> Dragos Tatulea (2):
>    net: page_pool: Expose internal limit
>    net/mlx5e: Clamp page_pool size to max
> 
>   drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
>   include/net/page_pool/types.h                     | 2 ++
>   net/core/page_pool.c                              | 2 +-
>   3 files changed, 5 insertions(+), 1 deletion(-)
> 
> 
> base-commit: 312e6f7676e63bbb9b81e5c68e580a9f776cc6f0

For the series:
Reviewed-by: Dawid Osuchowski <dawid.osuchowski@linux.intel.com>

Thanks,
Dawid


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-22  9:18 ` [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max Tariq Toukan
@ 2025-09-23 13:10   ` Simon Horman
  2025-09-23 14:23   ` Jakub Kicinski
  1 sibling, 0 replies; 12+ messages in thread
From: Simon Horman @ 2025-09-23 13:10 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

On Mon, Sep 22, 2025 at 12:18:35PM +0300, Tariq Toukan wrote:
> From: Dragos Tatulea <dtatulea@nvidia.com>
> 
> When the user configures a large ring size (8K) and a large MTU (9000)
> in HW-GRO mode, the queue will fail to allocate due to the size of the
> page_pool going above the limit.
> 
> This change clamps the pool_size to the limit.
> 
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> index 5e007bb3bad1..e56052895776 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
> @@ -989,6 +989,8 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
>  		/* Create a page_pool and register it with rxq */
>  		struct page_pool_params pp_params = { 0 };
>  
> +		pool_size = min_t(u32, pool_size, PAGE_POOL_SIZE_LIMIT);

pool_size is u32 and PAGE_POOL_SIZE_LIMIT is a constant.
AFAIK min() would work just fine here.

> +
>  		pp_params.order     = 0;
>  		pp_params.flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
>  		pp_params.pool_size = pool_size;
> -- 
> 2.31.1
> 
> 


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-22  9:18 ` [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max Tariq Toukan
  2025-09-23 13:10   ` Simon Horman
@ 2025-09-23 14:23   ` Jakub Kicinski
  2025-09-23 15:12     ` Dragos Tatulea
  1 sibling, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2025-09-23 14:23 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Eric Dumazet, Paolo Abeni, Andrew Lunn, David S. Miller,
	Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman, Dragos Tatulea

On Mon, 22 Sep 2025 12:18:35 +0300 Tariq Toukan wrote:
> When the user configures a large ring size (8K) and a large MTU (9000)
> in HW-GRO mode, the queue will fail to allocate due to the size of the
> page_pool going above the limit.

Please do some testing. A PP cache of 32k is just silly, you should
probably use a smaller limit.


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-23 14:23   ` Jakub Kicinski
@ 2025-09-23 15:12     ` Dragos Tatulea
  2025-09-23 15:23       ` Jakub Kicinski
  0 siblings, 1 reply; 12+ messages in thread
From: Dragos Tatulea @ 2025-09-23 15:12 UTC (permalink / raw)
  To: Jakub Kicinski, Tariq Toukan
  Cc: Eric Dumazet, Paolo Abeni, Andrew Lunn, David S. Miller,
	Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman

On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:
> On Mon, 22 Sep 2025 12:18:35 +0300 Tariq Toukan wrote:
> > When the user configures a large ring size (8K) and a large MTU (9000)
> > in HW-GRO mode, the queue will fail to allocate due to the size of the
> > page_pool going above the limit.
> 
> Please do some testing. A PP cache of 32k is just silly, you should
> probably use a smaller limit.
You mean clamping the pool_size to a certain limit so that the page_pool
ring size doesn't cover a full RQ when the RQ ring size is too large?

Thanks,
Dragos


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-23 15:12     ` Dragos Tatulea
@ 2025-09-23 15:23       ` Jakub Kicinski
  2025-09-24  0:23         ` Jakub Kicinski
  0 siblings, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2025-09-23 15:23 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: Tariq Toukan, Eric Dumazet, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman

On Tue, 23 Sep 2025 15:12:33 +0000 Dragos Tatulea wrote:
> On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:
> > On Mon, 22 Sep 2025 12:18:35 +0300 Tariq Toukan wrote:  
> > > When the user configures a large ring size (8K) and a large MTU (9000)
> > > in HW-GRO mode, the queue will fail to allocate due to the size of the
> > > page_pool going above the limit.  
> > 
> > Please do some testing. A PP cache of 32k is just silly, you should
> > probably use a smaller limit.  
> You mean clamping the pool_size to a certain limit so that the page_pool
> ring size doesn't cover a full RQ when the RQ ring size is too large?

Yes, an 8k ring will take milliseconds to drain. We don't really need
milliseconds of page cache. By the time the driver has processed the full
ring we must have gone through 128 NAPI cycles, and the application has
most likely already started freeing the pages.

If my math is right, at 80Gbps per ring and 9k MTU it takes more than
1usec to receive a frame. So 8msec just to _receive_ a full ring worth
of data. At Meta we mostly use large rings to cover up scheduler and
IRQ masking latency.


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-23 15:23       ` Jakub Kicinski
@ 2025-09-24  0:23         ` Jakub Kicinski
  2025-09-25 10:25           ` Dragos Tatulea
  0 siblings, 1 reply; 12+ messages in thread
From: Jakub Kicinski @ 2025-09-24  0:23 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: Tariq Toukan, Eric Dumazet, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman

On Tue, 23 Sep 2025 08:23:10 -0700 Jakub Kicinski wrote:
> On Tue, 23 Sep 2025 15:12:33 +0000 Dragos Tatulea wrote:
> > On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:  
> > > Please do some testing. A PP cache of 32k is just silly, you should
> > > probably use a smaller limit.    
> > You mean clamping the pool_size to a certain limit so that the page_pool
> > ring size doesn't cover a full RQ when the RQ ring size is too large?  
> 
> Yes, an 8k ring will take milliseconds to drain. We don't really need
> milliseconds of page cache. By the time the driver has processed the full
> ring we must have gone through 128 NAPI cycles, and the application has
> most likely already started freeing the pages.
> 
> If my math is right, at 80Gbps per ring and 9k MTU it takes more than
> 1usec to receive a frame. So 8msec just to _receive_ a full ring worth
> of data. At Meta we mostly use large rings to cover up scheduler and
> IRQ masking latency.

On second thought, let's just clamp it to 16k in the core and remove
the error. Clearly the expectations of the API are too intricate,
most drivers just use ring size as the cache size.


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-24  0:23         ` Jakub Kicinski
@ 2025-09-25 10:25           ` Dragos Tatulea
  2025-09-25 15:03             ` Jakub Kicinski
  0 siblings, 1 reply; 12+ messages in thread
From: Dragos Tatulea @ 2025-09-25 10:25 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Tariq Toukan, Eric Dumazet, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman

On Tue, Sep 23, 2025 at 05:23:05PM -0700, Jakub Kicinski wrote:
> On Tue, 23 Sep 2025 08:23:10 -0700 Jakub Kicinski wrote:
> > On Tue, 23 Sep 2025 15:12:33 +0000 Dragos Tatulea wrote:
> > > On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:  
> > > > Please do some testing. A PP cache of 32k is just silly, you should
> > > > probably use a smaller limit.    
> > > You mean clamping the pool_size to a certain limit so that the page_pool
> > > ring size doesn't cover a full RQ when the RQ ring size is too large?  
> > 
> > Yes, an 8k ring will take milliseconds to drain. We don't really need
> > milliseconds of page cache. By the time the driver has processed the full
> > ring we must have gone through 128 NAPI cycles, and the application has
> > most likely already started freeing the pages.
> > 
> > If my math is right, at 80Gbps per ring and 9k MTU it takes more than
> > 1usec to receive a frame. So 8msec just to _receive_ a full ring worth
> > of data. At Meta we mostly use large rings to cover up scheduler and
> > IRQ masking latency.
> 
> On second thought, let's just clamp it to 16k in the core and remove
> the error. Clearly the expectations of the API are too intricate,
> most drivers just use ring size as the cache size.
Makes sense. For my peace of mind I want to do some packet rate tests
to see that there is no perf difference, and to compare the page_pool stats.

Should the page_pool print a warning when it clamps?

Also, checking for size > 32K and clamping to 16K looks a bit weird...
Should the limit be lowered to 16K altogether?

Thanks,
Dragos


* Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
  2025-09-25 10:25           ` Dragos Tatulea
@ 2025-09-25 15:03             ` Jakub Kicinski
  0 siblings, 0 replies; 12+ messages in thread
From: Jakub Kicinski @ 2025-09-25 15:03 UTC (permalink / raw)
  To: Dragos Tatulea
  Cc: Tariq Toukan, Eric Dumazet, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, Ilias Apalodimas, netdev, linux-rdma,
	linux-kernel, Gal Pressman

On Thu, 25 Sep 2025 10:25:40 +0000 Dragos Tatulea wrote:
> Should the page_pool print a warning when it clamps?

I don't think so, goal is to avoid having all drivers copy the clamp on
their side. So if we still warn drivers will still have to worry.

> Also, checking for size > 32K and clamping to 16K looks a bit weird...
> Should the limit be lowered to 16K altogether?

That's what I mean, replace the if (>32k) return E2BIG; with
size = min(size, 16k).


* Re: [PATCH net-next 1/2] net: page_pool: Expose internal limit
  2025-09-22  9:18 ` [PATCH net-next 1/2] net: page_pool: Expose internal limit Tariq Toukan
@ 2025-10-24  9:11   ` Ilias Apalodimas
  0 siblings, 0 replies; 12+ messages in thread
From: Ilias Apalodimas @ 2025-10-24  9:11 UTC (permalink / raw)
  To: Tariq Toukan
  Cc: Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andrew Lunn,
	David S. Miller, Saeed Mahameed, Mark Bloch, Leon Romanovsky,
	Jesper Dangaard Brouer, netdev, linux-rdma, linux-kernel,
	Gal Pressman, Dragos Tatulea

On Mon, 22 Sept 2025 at 12:19, Tariq Toukan <tariqt@nvidia.com> wrote:
>
> From: Dragos Tatulea <dtatulea@nvidia.com>
>
> page_pool_init() has a sanity check that rejects a pool_size above 32K.
> But page_pool users have no access to this limit, so there is no way to
> trim the pool_size in advance. The -E2BIG error is of little help for
> retrying, as the driver would have to guess the next size to try.
>
> This patch exposes the limit in the page_pool header.
>
> Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
> Signed-off-by: Tariq Toukan <tariqt@nvidia.com>

Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>

> ---
>  include/net/page_pool/types.h | 2 ++
>  net/core/page_pool.c          | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index 1509a536cb85..22aee9a65a26 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -163,6 +163,8 @@ struct pp_memory_provider_params {
>         const struct memory_provider_ops *mp_ops;
>  };
>
> +#define PAGE_POOL_SIZE_LIMIT 32768
> +
>  struct page_pool {
>         struct page_pool_params_fast p;
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 36a98f2bcac3..1f0fdfb02f08 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -214,7 +214,7 @@ static int page_pool_init(struct page_pool *pool,
>                 ring_qsize = pool->p.pool_size;
>
>         /* Sanity limit mem that can be pinned down */
> -       if (ring_qsize > 32768)
> +       if (ring_qsize > PAGE_POOL_SIZE_LIMIT)
>                 return -E2BIG;
>
>         /* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
> --
> 2.31.1
>

