From: Randy Dunlap <rdunlap@infradead.org>
To: Yunsheng Lin <linyunsheng@huawei.com>,
davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
Lorenzo Bianconi <lorenzo@kernel.org>,
Alexander Duyck <alexander.duyck@gmail.com>,
Liang Chen <liangchen.linux@gmail.com>,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Eric Dumazet <edumazet@google.com>,
Jonathan Corbet <corbet@lwn.net>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
John Fastabend <john.fastabend@gmail.com>,
linux-doc@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH v5 RFC 5/6] page_pool: update document about frag API
Date: Thu, 29 Jun 2023 13:30:42 -0700
Message-ID: <dc48e465-c422-d9c4-a28e-7ed97950e1c8@infradead.org>
In-Reply-To: <20230629120226.14854-6-linyunsheng@huawei.com>

Hi--
On 6/29/23 05:02, Yunsheng Lin wrote:
> As more drivers begin to use the frag API, update the
> document about how to decide which API to use for the
> driver author.
>
> Also it seems there is a similar document in page_pool.h,
> so remove it to avoid the duplication.
>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> CC: Lorenzo Bianconi <lorenzo@kernel.org>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> CC: Liang Chen <liangchen.linux@gmail.com>
> CC: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> Documentation/networking/page_pool.rst | 34 ++++++++++++++++++++++----
> include/net/page_pool.h | 22 -----------------
> 2 files changed, 29 insertions(+), 27 deletions(-)
>
> diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
> index 873efd97f822..18b13d659c98 100644
> --- a/Documentation/networking/page_pool.rst
> +++ b/Documentation/networking/page_pool.rst
> @@ -4,12 +4,27 @@
> Page Pool API
> =============
>
> -The page_pool allocator is optimized for the XDP mode that uses one frame
> -per-page, but it can fallback on the regular page allocator APIs.
> +The page_pool allocator is optimized for recycling page or page frag used by skb
> +packet and xdp frame.
That sentence could use some articles. Choose singular or plural:
> +The page_pool allocator is optimized for recycling a page or page frag used by an skb
> +packet or xdp frame.
or
> +The page_pool allocator is optimized for recycling pages or page frags used by skb
> +packets or xdp frames.
Now that I have written them, I prefer the latter one (plural). FWIW.
>
> -Basic use involves replacing alloc_pages() calls with the
> -page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
> -replacing dev_alloc_pages().
> +Basic use involves replacing napi_alloc_frag() and alloc_pages() calls with
> +page_pool_cache_alloc() and page_pool_alloc(), which allocate memory with or
> +without page splitting depending on the requested memory size.
> +
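A short usage sketch in the doc might help driver authors here; maybe
something like this (untested, and I'm assuming the signatures from
patch 3/6 of this series, so adjust if the final API differs):

  /* Rough sketch: rx-buffer refill with the unified alloc API.
   * 'pool' was created earlier with page_pool_create(); rx_buf_len
   * is whatever buffer size the driver wants.
   */
  unsigned int offset, size = rx_buf_len;
  struct page *page;

  /* Splits a page when 'size' fits in half a page, otherwise hands
   * out a full page; 'offset' reports where the buffer starts.
   */
  page = page_pool_dev_alloc(pool, &offset, &size);
  if (!page)
          return -ENOMEM;
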
> +If the driver knows that it always requires full pages or its allocates are
allocations are
> +always smaller than half a page, it can use one of the more specific API calls:
> +
> +1. page_pool_alloc_pages(): allocate memory without page splitting when driver
> + knows that the memory it need is always bigger than half of the page
> + allocated from page pool. There is no cache line dirtying for 'struct page'
> + when a page is recycled back to the page pool.
> +
> +2. page_pool_alloc_frag(): allocate memory with page splitting when driver knows
> + that the memory it need is always smaller than or equal to half of the page
> + allocated from page pool. Page splitting enables memory saving and thus avoid
and thus avoids
> + TLB/cache miss for data access, but there also is some cost to implement page
> + splitting, mainly some cache line dirtying/bouncing for 'struct page' and
> + atomic operation for page->pp_frag_count.
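Also, for these two a tiny example might save readers a trip to the
header file; roughly (untested sketch using the existing
page_pool_dev_* wrappers):

  /* Case 1: buffers always larger than half a page -> whole pages. */
  struct page *page = page_pool_dev_alloc_pages(pool);

  /* Case 2: buffers at most half a page -> let the pool split pages;
   * 'offset' is where this frag starts within the page.
   */
  unsigned int offset;
  struct page *frag = page_pool_dev_alloc_frag(pool, &offset, 2048);
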
>
> API keeps track of in-flight pages, in order to let API user know
> when it is safe to free a page_pool object. Thus, API users
> @@ -93,6 +108,15 @@ a page will cause no race conditions is enough.
> * page_pool_dev_alloc_pages(): Get a page from the page allocator or page_pool
> caches.
>
> +* page_pool_dev_alloc_frag(): Get a page frag from the page allocator or
> + page_pool caches.
> +
> +* page_pool_dev_alloc(): Get a page or page frag from the page allocator or
> + page_pool caches.
> +
> +* page_pool_dev_cache_alloc(): Get a cache from the page allocator or page_pool
> + caches.
> +
> * page_pool_get_dma_addr(): Retrieve the stored DMA address.
>
> * page_pool_get_dma_dir(): Retrieve the stored DMA direction.
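The two DMA helpers might deserve a one-line example as well, e.g.
(sketch; assumes the pool was created with PP_FLAG_DMA_MAP so that it
stores the mapping, and that 'dev', 'offset' and 'len' come from the
driver):

  dma_addr_t dma = page_pool_get_dma_addr(page);
  enum dma_data_direction dir = page_pool_get_dma_dir(pool);

  /* Make the buffer visible to the device before posting it. */
  dma_sync_single_for_device(dev, dma + offset, len, dir);
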
Thanks for adding the documentation.
--
~Randy