From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Ratheesh Kannoth <rkannoth@marvell.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Sunil Kovvuri Goutham" <sgoutham@marvell.com>,
Geethasowjanya Akula <gakula@marvell.com>,
Subbaraya Sundeep Bhatta <sbhatta@marvell.com>,
Hariprasad Kelam <hkelam@marvell.com>,
"davem@davemloft.net" <davem@davemloft.net>,
"edumazet@google.com" <edumazet@google.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"pabeni@redhat.com" <pabeni@redhat.com>
Subject: Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
Date: Thu, 3 Aug 2023 17:07:07 +0200 [thread overview]
Message-ID: <f04cf074-1cff-d30a-4237-ad11f62290b1@intel.com> (raw)
In-Reply-To: <CY4PR1801MB1911E15D518A77535F6E51E2D308A@CY4PR1801MB1911.namprd18.prod.outlook.com>
From: Ratheesh Kannoth <rkannoth@marvell.com>
Date: Thu, 3 Aug 2023 02:08:18 +0000
>> From: Alexander Lobakin <aleksander.lobakin@intel.com>
>> Sent: Wednesday, August 2, 2023 9:42 PM
>> To: Ratheesh Kannoth <rkannoth@marvell.com>
>> Subject: [EXT] Re: [PATCH net] octeontx2-pf: Set maximum queue size to 16K
>
>> +ring->rx_max_pending = 16384; /* Page pool support on RX */
>>
>> This is very hardcodish. Why not limit the Page Pool size when creating
>> instead? It's perfectly fine to have a queue with 64k descriptors and a Page
>> Pool with only ("only" :D) 16k elements.
>> Page Pool size affects only the size of the embedded ptr_ring, which is used
>> for indirect (locking) recycling. I would even recommend to not go past 2k for
>> PP sizes, it makes no sense and only consumes memory.
>
> This recycling will impact performance, right? Otherwise, why didn't page pool make this size a constant?
Page Pool doesn't need huge ptr_ring sizes to recycle pages
successfully, especially since recent PP optimizations made locking
recycling much rarer.
If you prove with some performance numbers that creating page_pools
with a ptr_ring size of 2k when the rings have 32k descriptors really
hurts throughput compared to 16k PP + 32k rings, I'll change my mind.
Re "size as constant" -- lots of NICs don't need more than 256 or 512
descriptors, so creating page_pools with huge ptr_rings for them would
only waste memory. Queue sizes bigger than 1024 (OK, maybe 2048) are
the point where linear scaling stops working. That's why I believe
going outside [64, 2048] for page_pools doesn't make much sense.
Thanks,
Olek
Thread overview: 9+ messages
2023-08-02 10:52 [PATCH net] octeontx2-pf: Set maximum queue size to 16K Ratheesh Kannoth
2023-08-02 16:11 ` Alexander Lobakin
2023-08-03 1:13 ` Jakub Kicinski
2023-08-03 2:08 ` Ratheesh Kannoth
2023-08-03 15:07 ` Alexander Lobakin [this message]
2023-08-04 2:25 ` [EXT] " Ratheesh Kannoth
2023-08-04 14:43 ` Alexander Lobakin
2023-08-04 20:35 ` Jakub Kicinski
2023-08-07 2:51 ` Ratheesh Kannoth