From: Jakub Kicinski <kuba@kernel.org>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: Ratheesh Kannoth <rkannoth@marvell.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Sunil Kovvuri Goutham" <sgoutham@marvell.com>,
Geethasowjanya Akula <gakula@marvell.com>,
Subbaraya Sundeep Bhatta <sbhatta@marvell.com>,
Hariprasad Kelam <hkelam@marvell.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"edumazet@google.com" <edumazet@google.com>,
"pabeni@redhat.com" <pabeni@redhat.com>
Subject: Re: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
Date: Fri, 4 Aug 2023 13:35:12 -0700
Message-ID: <20230804133512.4dbbbc16@kernel.org>
In-Reply-To: <8732499b-df8c-0ee0-bf0e-815736cf4de2@intel.com>
On Fri, 4 Aug 2023 16:43:51 +0200 Alexander Lobakin wrote:
> > So, should we clamp to 2048 in page_pool_init()? But it looks odd to me,
> > as the user requests > 2048 but will never be aware that it was clamped to 2048.
>
> Why should he be aware of that? :D
> But seriously, I can't just say: "hey, I promise you that your driver
> will work best when PP size is clamped to 2048, just blindly follow",
> it's more of a preference right now. Because...
>
> > Would it be better to do this clamping in the driver and print a warning message?
>
> ...because you just need to test your driver with different PP sizes and
> decide yourself which upper cap to set. If it works the same when queues
> are 16k and PPs are 2k versus 16k + 16k -- fine, you can stop on that.
> If 16k + 16k or 16 + 8 or whatever works better -- stop on that. No hard
> reqs.
>
> Just don't cap maximum queue length due to PP sanity check, it doesn't
> make sense.
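(For illustration only -- the driver-side option quoted above would look
roughly like the sketch below. The helper name and the OTX2_PP_MAX_SIZE
cap are made up for the example; they are not taken from the actual
octeontx2 code.)

#include <linux/netdevice.h>
#include <net/page_pool.h>

#define OTX2_PP_MAX_SIZE	2048	/* assumed driver-chosen upper cap */

static struct page_pool *otx2_create_pool(struct net_device *netdev,
					  struct device *dev,
					  unsigned int ring_size)
{
	struct page_pool_params pp_params = { 0 };

	/* Clamp in the driver and tell the user about it */
	if (ring_size > OTX2_PP_MAX_SIZE) {
		netdev_warn(netdev, "page pool size clamped from %u to %u\n",
			    ring_size, OTX2_PP_MAX_SIZE);
		ring_size = OTX2_PP_MAX_SIZE;
	}

	pp_params.order = 0;
	pp_params.flags = PP_FLAG_DMA_MAP;
	pp_params.pool_size = ring_size;
	pp_params.nid = NUMA_NO_NODE;
	pp_params.dev = dev;
	pp_params.dma_dir = DMA_FROM_DEVICE;

	return page_pool_create(&pp_params);
}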
IDK if I agree with you here :S Tuning this in the driver relies on
the assumption that the HW / driver is the thing that matters.
I'd think that the workload, platform (CPU) and config (e.g. is IOMMU
enabled?) will matter at least as much. Meanwhile, driver developers
will end up tuning to whatever servers they happen to have, a single
random config, and most likely... iperf.
IMO it's much better to re-purpose "pool_size" and treat it as the ring
size, because that's what most drivers end up putting there.
Defer tuning of the effective ring size to the core and user input
(via the "it will be added any minute now" netlink API for configuring
page pools)...
So capping the recycle ring to 32k instead of returning the error seems
like an okay solution for now.
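(Again for illustration -- turning the existing sanity check in
page_pool_init() into a clamp would look roughly like this. It is based
on the current check in net/core/page_pool.c and is a sketch, not a
tested patch.)

	/* In page_pool_init() -- sketch only */
	unsigned int ring_qsize = 1024; /* Default */

	if (pool->p.pool_size)
		ring_qsize = pool->p.pool_size;

	/* Sanity limit mem that can be pinned down, but clamp instead
	 * of failing pool creation with -E2BIG.
	 */
	ring_qsize = min(ring_qsize, 32768u);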
Thread overview: 9+ messages
2023-08-02 10:52 [PATCH net] octeontx2-pf: Set maximum queue size to 16K Ratheesh Kannoth
2023-08-02 16:11 ` Alexander Lobakin
2023-08-03 1:13 ` Jakub Kicinski
2023-08-03 2:08 ` Ratheesh Kannoth
2023-08-03 15:07 ` Alexander Lobakin
2023-08-04 2:25 ` [EXT] " Ratheesh Kannoth
2023-08-04 14:43 ` Alexander Lobakin
2023-08-04 20:35 ` Jakub Kicinski [this message]
2023-08-07 2:51 ` Ratheesh Kannoth