From: Randy Dunlap <rdunlap@infradead.org>
To: Yunsheng Lin <linyunsheng@huawei.com>,
	davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Lorenzo Bianconi <lorenzo@kernel.org>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	Liang Chen <liangchen.linux@gmail.com>,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Eric Dumazet <edumazet@google.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	John Fastabend <john.fastabend@gmail.com>,
	linux-doc@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH net-next v6 5/6] page_pool: update document about frag API
Date: Mon, 14 Aug 2023 15:42:34 -0700
Message-ID: <479a9c1f-9db7-61c8-3485-9b330f777930@infradead.org>
In-Reply-To: <20230814125643.59334-6-linyunsheng@huawei.com>

Hi--

On 8/14/23 05:56, Yunsheng Lin wrote:
> As more drivers begin to use the frag API, update the
> document about how to decide which API to use for the
> driver author.
> 
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> CC: Lorenzo Bianconi <lorenzo@kernel.org>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> CC: Liang Chen <liangchen.linux@gmail.com>
> CC: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>   Documentation/networking/page_pool.rst |  4 +-
>   include/net/page_pool/helpers.h        | 58 +++++++++++++++++++++++---
>   2 files changed, 55 insertions(+), 7 deletions(-)
> 

> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index b920224f6584..0f1eaa2986f9 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -8,13 +8,28 @@
>   /**
>    * DOC: page_pool allocator
>    *
> - * The page_pool allocator is optimized for the XDP mode that
> - * uses one frame per-page, but it can fallback on the
> - * regular page allocator APIs.
> + * The page_pool allocator is optimized for recycling page or page frag used by
> + * skb packet and xdp frame.
>    *
> - * Basic use involves replacing alloc_pages() calls with the
> - * page_pool_alloc_pages() call.  Drivers should use
> - * page_pool_dev_alloc_pages() replacing dev_alloc_pages().
> + * Basic use involves replacing napi_alloc_frag() and alloc_pages() calls with
> + * page_pool_cache_alloc() and page_pool_alloc(), which allocate memory with or
> + * without page splitting depending on the requested memory size.
> + *
> + * If the driver knows that it always requires full pages or its allocates are

                                                              allocations

> + * always smaller than half a page, it can use one of the more specific API
> + * calls:
> + *
> + * 1. page_pool_alloc_pages(): allocate memory without page splitting when
> + * driver knows that the memory it need is always bigger than half of the page
> + * allocated from page pool. There is no cache line dirtying for 'struct page'
> + * when a page is recycled back to the page pool.
> + *
> + * 2. page_pool_alloc_frag(): allocate memory with page splitting when driver
> + * knows that the memory it need is always smaller than or equal to half of the
> + * page allocated from page pool. Page splitting enables memory saving and thus
> + * avoid TLB/cache miss for data access, but there also is some cost to

       avoids

> + * implement page splitting, mainly some cache line dirtying/bouncing for
> + * 'struct page' and atomic operation for page->pp_frag_count.
>    *
>    * API keeps track of in-flight pages, in order to let API user know
>    * when it is safe to free a page_pool object.  Thus, API users
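
For readers of the new text, a rough sketch of that decision rule as it might
look in a driver rx refill path (pool, rx_buf_len and the error handling are
made up for illustration; this mirrors the size check that the text above says
page_pool_alloc() performs internally):

	struct page *page;
	unsigned int offset;
	void *buf;

	if (rx_buf_len > PAGE_SIZE / 2) {
		/* always needs more than half a page: no splitting */
		page = page_pool_dev_alloc_pages(pool);
		offset = 0;
	} else {
		/* fits in half a page or less: let the pool split pages */
		page = page_pool_dev_alloc_frag(pool, &offset, rx_buf_len);
	}
	if (!page)
		return -ENOMEM;
	buf = page_address(page) + offset;
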
> @@ -100,6 +115,14 @@ static inline struct page *page_pool_alloc_frag(struct page_pool *pool,
>   	return __page_pool_alloc_frag(pool, offset, size, gfp);
>   }
>   
> +/**
> + * page_pool_dev_alloc_frag() - allocate a page frag.
> + * @pool[in]	pool from which to allocate
> + * @offset[out]	offset to the allocated page
> + * @size[in]	requested size

Please use kernel-doc syntax/notation here.
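That is, each parameter should be documented as "@name: description" (colon
after the name, no bracketed [in]/[out] tags; direction can be noted in the
description text), e.g. something like:

 * @pool:   pool from which to allocate
 * @offset: offset to the allocated page
 * @size:   requested size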

> + *
> + * Get a page frag from the page allocator or page_pool caches.
> + */
>   static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
>   						    unsigned int *offset,
>   						    unsigned int size)
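
For what it's worth, a minimal sketch of how a driver might consume the
page + offset pair this returns (the skb handling below is a generic pattern,
not something from this patch; skb and buf_len are assumed to exist):

	page = page_pool_dev_alloc_frag(pool, &offset, buf_len);
	if (!page)
		goto drop;
	/* attach the frag to the skb; truesize covers the whole frag */
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
			buf_len, buf_len);
	/* let the stack return the page to the pool when the skb is freed */
	skb_mark_for_recycle(skb);
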
> @@ -143,6 +166,14 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
>   	return page;
>   }
>   
> +/**
> + * page_pool_dev_alloc() - allocate a page or a page frag.
> + * @pool[in]:		pool from which to allocate
> + * @offset[out]:	offset to the allocated page
> + * @size[in, out]:	in as the requested size, out as the allocated size

and here.

> + *
> + * Get a page or a page frag from the page allocator or page_pool caches.
> + */
>   static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
>   					       unsigned int *offset,
>   					       unsigned int *size)
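
As a usage note on the in/out @size here, a hedged sketch (rx_buf_len and
rx_desc are invented driver fields): the caller passes the size it wants and
gets back the size the pool actually allocated, which may differ from the
request:

	unsigned int offset;
	unsigned int size = rx_buf_len;		/* in: requested size */
	struct page *page;

	page = page_pool_dev_alloc(pool, &offset, &size);
	if (!page)
		return -ENOMEM;
	/* out: 'size' is the usable length at page_address(page) + offset */
	rx_desc->buf = page_address(page) + offset;
	rx_desc->buf_len = size;
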
> @@ -165,6 +196,13 @@ static inline void *page_pool_cache_alloc(struct page_pool *pool,
>   	return page_address(page) + offset;
>   }
>   
> +/**
> + * page_pool_dev_cache_alloc() - allocate a cache.
> + * @pool[in]:		pool from which to allocate
> + * @size[in, out]:	in as the requested size, out as the allocated size

and here.

> + *
> + * Get a cache from the page allocator or page_pool caches.
> + */
>   static inline void *page_pool_dev_cache_alloc(struct page_pool *pool,
>   					      unsigned int *size)
>   {
> @@ -316,6 +354,14 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
>   	page_pool_put_full_page(pool, page, true);
>   }
>   
> +/**
> + * page_pool_cache_free() - free a cache into the page_pool
> + * @pool[in]:		pool from which cache was allocated
> + * @data[in]:		cache to free
> + * @allow_direct[in]:	freed by the consumer, allow lockless caching

and here.

> + *
> + * Free a cache allocated from page_pool_dev_cache_alloc().
> + */
>   static inline void page_pool_cache_free(struct page_pool *pool, void *data,
>   					bool allow_direct)
>   {
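
And to round it out, a small sketch of the cache alloc/free pairing these
kernel-doc blocks describe, following the API as proposed in this patch
(buf_len and the NAPI-context assumption for allow_direct are mine):

	unsigned int size = buf_len;	/* in: requested size */
	void *data;

	data = page_pool_dev_cache_alloc(pool, &size);
	if (!data)
		return -ENOMEM;
	/* ... use up to 'size' bytes at 'data' ... */

	/* allow_direct=true only when freeing from the pool's own (NAPI) context */
	page_pool_cache_free(pool, data, true);
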

Thanks.

