From: Jakub Kicinski <kuba@kernel.org>
To: Dragos Tatulea <dtatulea@nvidia.com>
Cc: Tariq Toukan <tariqt@nvidia.com>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Saeed Mahameed <saeedm@nvidia.com>,
Mark Bloch <mbloch@nvidia.com>, Leon Romanovsky <leon@kernel.org>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
linux-kernel@vger.kernel.org, Gal Pressman <gal@nvidia.com>
Subject: Re: [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max
Date: Tue, 23 Sep 2025 17:23:05 -0700
Message-ID: <20250923172305.0b0a235c@kernel.org>
In-Reply-To: <20250923082310.2316e34d@kernel.org>
On Tue, 23 Sep 2025 08:23:10 -0700 Jakub Kicinski wrote:
> On Tue, 23 Sep 2025 15:12:33 +0000 Dragos Tatulea wrote:
> > On Tue, Sep 23, 2025 at 07:23:56AM -0700, Jakub Kicinski wrote:
> > > Please do some testing. A PP cache of 32k is just silly; you should
> > > probably use a smaller limit.
> > You mean clamping the pool_size to a certain limit so that the page_pool
> > ring size doesn't cover a full RQ when the RQ ring size is too large?
>
> Yes, an 8k ring will take milliseconds to drain. We don't really need
> milliseconds of page cache. By the time the driver has processed the
> full ring we must have gone through 128 NAPI cycles, and the
> application has most likely already started freeing the pages.
>
> If my math is right, at 80 Gbps per ring and 9k MTU it takes close to
> 1 usec to receive a frame. So ~8 msec to just _receive_ a full ring's
> worth of data. At Meta we mostly use large rings to cover up scheduler
> and IRQ masking latency.
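(Spelling the arithmetic out: 9000 B * 8 = 72,000 bits, and
72,000 / 80e9 b/s ~= 0.9 usec of wire time per frame, so an 8192-entry
ring holds 8192 * ~0.9 usec ~= 7.4 msec of line-rate traffic. The 128
NAPI cycles above come from draining 8192 packets at the default NAPI
budget of 64 packets per poll: 8192 / 64 = 128.)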
On second thought, let's just clamp it to 16k in the core and remove
the error. Clearly the expectations of the API are too intricate;
most drivers just use the ring size as the cache size.
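Untested sketch of what that could look like in page_pool_init() in
net/core/page_pool.c (the existing -E2BIG check is what's there today;
the exact 16k constant and comment wording below are illustrative, not
a real patch):

--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ static int page_pool_init(struct page_pool *pool,
 	if (pool->p.pool_size)
 		ring_qsize = pool->p.pool_size;
 
-	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
-		return -E2BIG;
+	/* Clamp instead of failing: most drivers simply pass their RX
+	 * ring size as pool_size, and a cache deeper than 16k pages is
+	 * rarely useful.
+	 */
+	ring_qsize = min(ring_qsize, 16384);

If something along these lines lands in the core, a driver-side clamp
like the one in patch 2/2 of this series would presumably become
unnecessary.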
Thread overview: 12+ messages
2025-09-22 9:18 [PATCH net-next 0/2] net: page_pool: Expose size limit Tariq Toukan
2025-09-22 9:18 ` [PATCH net-next 1/2] net: page_pool: Expose internal limit Tariq Toukan
2025-10-24 9:11 ` Ilias Apalodimas
2025-09-22 9:18 ` [PATCH net-next 2/2] net/mlx5e: Clamp page_pool size to max Tariq Toukan
2025-09-23 13:10 ` Simon Horman
2025-09-23 14:23 ` Jakub Kicinski
2025-09-23 15:12 ` Dragos Tatulea
2025-09-23 15:23 ` Jakub Kicinski
2025-09-24 0:23 ` Jakub Kicinski [this message]
2025-09-25 10:25 ` Dragos Tatulea
2025-09-25 15:03 ` Jakub Kicinski
2025-09-22 10:04 ` [PATCH net-next 0/2] net: page_pool: Expose size limit Dawid Osuchowski