From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Larysa Zaremba <larysa.zaremba@intel.com>,
netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
linux-kernel@vger.kernel.org, Eric Dumazet <edumazet@google.com>,
Michal Kubiak <michal.kubiak@intel.com>,
intel-wired-lan@lists.osuosl.org,
David Christensen <drc@linux.vnet.ibm.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
"David S. Miller" <davem@davemloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool)
Date: Mon, 11 Dec 2023 11:16:20 +0100
Message-ID: <03d7e8b0-8766-4f59-afd4-15b592693a83@intel.com>
In-Reply-To: <1103fe8f-04c8-8cc4-8f1b-ff45cea22b54@huawei.com>

From: Yunsheng Lin <linyunsheng@huawei.com>
Date: Fri, 8 Dec 2023 17:28:21 +0800
> On 2023/12/8 1:20, Alexander Lobakin wrote:
> ...
>
>> +
>> +/**
>> + * libie_rx_page_pool_create - create a PP with the default libie settings
>> + * @bq: buffer queue struct to fill
>> + * @napi: &napi_struct covering this PP (no usage outside its poll loops)
>> + *
>> + * Return: 0 on success, -errno on failure.
>> + */
>> +int libie_rx_page_pool_create(struct libie_buf_queue *bq,
>> +			      struct napi_struct *napi)
>> +{
>> +	struct page_pool_params pp = {
>> +		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
>> +		.order		= LIBIE_RX_PAGE_ORDER,
>> +		.pool_size	= bq->count,
>> +		.nid		= NUMA_NO_NODE,
>
> Is there a reason the NUMA_NO_NODE is used here instead of
> dev_to_node(napi->dev->dev.parent)?

NUMA_NO_NODE creates a "dynamic" page_pool and makes sure the pages are
local to the CPU where the PP allocation functions are called. Setting
::nid to a "static" value pins the PP to a particular node.
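
For context, this is roughly what the PP refill path does with ::nid
(paraphrased from net/core/page_pool.c, not a verbatim quote):

	/* Prefer the pool's own node; with NUMA_NO_NODE, fall back to
	 * the node of the CPU currently running the pool.
	 */
	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() :
						   pool->p.nid;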

But the main reason is that Rx queues can be distributed across several
nodes, and in that case NUMA_NO_NODE makes sure each page_pool is local
to the queue it's running on, while dev_to_node() would return the same
node for every queue, thus forcing some PPs to allocate remote pages.
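
To illustrate, a sketch of the variant you suggest (not code from this
series):

	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= LIBIE_RX_PAGE_ORDER,
		.pool_size	= bq->count,
		/* Pinned: every page comes from the device's home node,
		 * even for queues whose NAPI polls on a remote node.
		 */
		.nid		= dev_to_node(napi->dev->dev.parent),
	};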

Ideally, I'd like to pass the ID of the CPU each queue will run on and
use cpu_to_node(), but currently there are no NUMA-aware allocations in
the Intel drivers and Rx queues don't get the corresponding CPU ID when
being configured. I may revisit this later, but for now NUMA_NO_NODE is
the best choice here.
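
Roughly what I have in mind for later, assuming the buffer queue struct
grew a field carrying the target CPU (hypothetical ::rx_cpu, nothing
like it exists in the drivers today):

	/* Hypothetical: pin the pool to the node of the CPU the queue
	 * was assigned to during ring configuration.
	 */
	pp.nid = cpu_to_node(bq->rx_cpu);

That would only pay off together with a matching IRQ affinity, so that
the queue actually keeps running on that CPU.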
[...]
Thanks,
Olek