From: Yunsheng Lin <linyunsheng@huawei.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Larysa Zaremba <larysa.zaremba@intel.com>,
netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
linux-kernel@vger.kernel.org,
Michal Kubiak <michal.kubiak@intel.com>,
intel-wired-lan@lists.osuosl.org,
David Christensen <drc@linux.vnet.ibm.com>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool)
Date: Fri, 8 Dec 2023 17:28:21 +0800 [thread overview]
Message-ID: <1103fe8f-04c8-8cc4-8f1b-ff45cea22b54@huawei.com> (raw)
In-Reply-To: <20231207172010.1441468-9-aleksander.lobakin@intel.com>
On 2023/12/8 1:20, Alexander Lobakin wrote:
...
> +
> +/**
> + * libie_rx_page_pool_create - create a PP with the default libie settings
> + * @bq: buffer queue struct to fill
> + * @napi: &napi_struct covering this PP (no usage outside its poll loops)
> + *
> + * Return: 0 on success, -errno on failure.
> + */
> +int libie_rx_page_pool_create(struct libie_buf_queue *bq,
> + struct napi_struct *napi)
> +{
> + struct page_pool_params pp = {
> + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> + .order = LIBIE_RX_PAGE_ORDER,
> + .pool_size = bq->count,
> + .nid = NUMA_NO_NODE,
Is there a reason NUMA_NO_NODE is used here instead of
dev_to_node(napi->dev->dev.parent)?
> + .dev = napi->dev->dev.parent,
> + .netdev = napi->dev,
> + .napi = napi,
> + .dma_dir = DMA_FROM_DEVICE,
> + .offset = LIBIE_SKB_HEADROOM,
> + };
> +
> + /* HW-writeable / syncable length per one page */
> + pp.max_len = LIBIE_RX_BUF_LEN(pp.offset);
> +
> + /* HW-writeable length per buffer */
> + bq->rx_buf_len = libie_rx_hw_len(&pp);
> + /* Buffer size to allocate */
> + bq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp.offset +
> + bq->rx_buf_len));
> +
> + bq->pp = page_pool_create(&pp);
> +
> + return PTR_ERR_OR_ZERO(bq->pp);
> +}
> +EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
> +
...
> +/**
> + * libie_rx_sync_for_cpu - synchronize or recycle buffer post DMA
> + * @buf: buffer to process
> + * @len: frame length from the descriptor
> + *
> + * Process the buffer after it's written by HW. The regular path is to
> + * synchronize DMA for CPU, but in case of no data it will be immediately
> + * recycled back to its PP.
> + *
> + * Return: true when there's data to process, false otherwise.
> + */
> +static inline bool libie_rx_sync_for_cpu(const struct libie_rx_buffer *buf,
> + u32 len)
> +{
> + struct page *page = buf->page;
> +
> + /* Very rare, but possible case. The most common reason:
> + * the last fragment contained FCS only, which was then
> + * stripped by the HW.
> + */
> + if (unlikely(!len)) {
> + page_pool_recycle_direct(page->pp, page);
> + return false;
> + }
> +
> + page_pool_dma_sync_for_cpu(page->pp, page, buf->offset, len);
Is there a reason why page_pool_dma_sync_for_cpu() is still used when
page_pool_create() is called with the PP_FLAG_DMA_SYNC_DEV flag? Isn't
syncing already handled in the page_pool core when PP_FLAG_DMA_SYNC_DEV
is set?
> +
> + return true;
> +}
>
> /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
> * bitfield struct.
>
2023-12-07 17:19 [Intel-wired-lan] [PATCH net-next v6 00/12] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-12-07 17:19 ` [Intel-wired-lan] [PATCH net-next v6 01/12] page_pool: make sure frag API fields don't span between cachelines Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 02/12] page_pool: don't use driver-set flags field directly Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 03/12] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 04/12] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-12-09 1:15 ` Jakub Kicinski
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 05/12] iavf: drop page splitting and recycling Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 06/12] page_pool: constify some read-only function arguments Alexander Lobakin
2023-12-12 16:08 ` Ilias Apalodimas
2023-12-12 16:14 ` Ilias Apalodimas
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 07/12] page_pool: add DMA-sync-for-CPU inline helper Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool) Alexander Lobakin
2023-12-08 9:28 ` Yunsheng Lin [this message]
2023-12-08 11:28 ` Yunsheng Lin
2023-12-11 10:16 ` Alexander Lobakin
2023-12-11 19:23 ` Jakub Kicinski
2023-12-13 11:23 ` Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 09/12] iavf: pack iavf_ring more efficiently Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 10/12] iavf: switch to Page Pool Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 11/12] libie: add common queue stats Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 12/12] iavf: switch queue stats to libie Alexander Lobakin