From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Yunsheng Lin <yunshenglin0825@gmail.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Larysa Zaremba <larysa.zaremba@intel.com>,
netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Eric Dumazet <edumazet@google.com>,
linux-kernel@vger.kernel.org,
Yunsheng Lin <linyunsheng@huawei.com>,
Michal Kubiak <michal.kubiak@intel.com>,
intel-wired-lan@lists.osuosl.org,
David Christensen <drc@linux.vnet.ibm.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
"David S. Miller" <davem@davemloft.net>
Subject: Re: [Intel-wired-lan] [PATCH RFC net-next v4 6/9] iavf: switch to Page Pool
Date: Mon, 10 Jul 2023 15:34:28 +0200
Message-ID: <95c5ba92-bccd-6a9a-5373-606a482e36a3@intel.com>
In-Reply-To: <4946b9df-66ea-d184-b97c-0ba687e41df8@gmail.com>
From: Yunsheng Lin <yunshenglin0825@gmail.com>
Date: Sun, 9 Jul 2023 13:16:39 +0800
> On 2023/7/7 0:38, Alexander Lobakin wrote:
>
> ...
>
>>>
>>>> /**
>>>> @@ -766,13 +742,19 @@ void iavf_free_rx_resources(struct iavf_ring *rx_ring)
>>>> **/
>>>> int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
>>>> {
>>>> - struct device *dev = rx_ring->dev;
>>>> - int bi_size;
>>>> + struct page_pool *pool;
>>>> +
>>>> + pool = libie_rx_page_pool_create(&rx_ring->q_vector->napi,
>>>> + rx_ring->count);
>>>
>>> If a page is able to be split between more than one desc, perhaps
>>> the ptr_ring size does not need to be as big as rx_ring->count.
>>
>> But we don't know that in advance, right? Especially given that it's
>> hidden in the lib. Anyway, you can only assume that in the regular
>> case, if you always allocate frags of the same size, PP will split
>> pages when 2+ frags can fit there, or return the whole page
>> otherwise, but who knows what might happen.
>
> It seems the Intel driver is able to know the size of the memory it
> needs when creating the ring/queue/napi/pp. Maybe the driver could
> just tell libie how many descs it uses for the queue, and libie could
> adjust it accordingly?
But libie can't say for sure how PP will split pages for it, right?
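To illustrate (a rough sketch of the frag API as I understand it, not
actual libie code; `pool' and `truesize' here are placeholders): the
split decision lives entirely inside PP, so the caller only ever sees
frags:

	struct page *page;
	unsigned int offset;

	/* Requires the pool to be created with PP_FLAG_PAGE_FRAG.
	 * Whether two frags share one page, or each frag gets a whole
	 * page, is decided inside page_pool_alloc_frag() based on the
	 * requested size vs. the space left in the current page.
	 */
	page = page_pool_alloc_frag(pool, &offset, truesize, GFP_ATOMIC);
	if (!page)
		return -ENOMEM;

For truesize <= PAGE_SIZE / 2 you'd expect 2+ frags per page, and a
whole page per frag for anything bigger, but that's a heuristic, not a
contract libie could size the ptr_ring against.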
>
>> BTW, with the recent recycling optimization, most recycling is done
>> directly through the cache, not the ptr_ring. So I'd even say it's
>> safe to start creating smaller ptr_rings in the drivers.
>
> The problem is that we may use more memory than before in certain
> cases if we don't limit the size of the ptr_ring, unless we can
> ensure all recycling is done directly through the cache, not the
> ptr_ring.
Also not sure I'm following =\
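To spell out my mental model (a sketch under my reading of the current
code, please correct me if it's wrong; `ring_count', `dev' and `napi'
are placeholders): recycled pages first try the direct/NAPI cache and
only then fall back to the ptr_ring, whose capacity is fixed by
->pool_size at creation time; anything that doesn't fit there is
released back to the page allocator, not accumulated:

	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG,
		.order		= 0,
		/* Hypothetical: cap the ptr_ring at half the desc
		 * count instead of sizing it 1:1 with the ring.
		 */
		.pool_size	= ring_count / 2,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.napi		= napi,	/* lets skb recycling hit the
					 * direct cache */
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp);

	if (IS_ERR(pool))
		return PTR_ERR(pool);

So a smaller pool_size bounds the ptr_ring memory; the worst case
should be more page allocator traffic, not more memory pinned.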
[...]
Thanks,
Olek
Thread overview: 33+ messages
2023-07-05 15:55 [Intel-wired-lan] [PATCH RFC net-next v4 0/9] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 1/9] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-07-14 14:17 ` Przemek Kitszel
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 2/9] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-07-14 14:17 ` Przemek Kitszel
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 3/9] iavf: drop page splitting and recycling Alexander Lobakin
2023-07-06 14:47 ` Alexander Duyck
2023-07-06 16:45 ` Alexander Lobakin
2023-07-06 17:06 ` Alexander Duyck
2023-07-10 13:13 ` Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 4/9] net: page_pool: add DMA-sync-for-CPU inline helpers Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 5/9] libie: add Rx buffer management (via Page Pool) Alexander Lobakin
2023-07-06 12:47 ` Yunsheng Lin
2023-07-06 16:28 ` Alexander Lobakin
2023-07-09 5:16 ` Yunsheng Lin
2023-07-10 13:25 ` Alexander Lobakin
2023-07-11 11:39 ` Yunsheng Lin
2023-07-11 16:37 ` Alexander Lobakin
2023-07-12 11:13 ` Yunsheng Lin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 6/9] iavf: switch to Page Pool Alexander Lobakin
2023-07-06 12:47 ` Yunsheng Lin
2023-07-06 16:38 ` Alexander Lobakin
2023-07-09 5:16 ` Yunsheng Lin
2023-07-10 13:34 ` Alexander Lobakin [this message]
2023-07-11 11:47 ` Yunsheng Lin
2023-07-18 13:56 ` Alexander Lobakin
2023-07-06 15:26 ` Alexander Duyck
2023-07-06 16:56 ` Alexander Lobakin
2023-07-06 17:28 ` Alexander Duyck
2023-07-10 13:18 ` Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 7/9] libie: add common queue stats Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 8/9] libie: add per-queue Page Pool stats Alexander Lobakin
2023-07-05 15:55 ` [Intel-wired-lan] [PATCH RFC net-next v4 9/9] iavf: switch queue stats to libie Alexander Lobakin