From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Randy Dunlap <rdunlap@infradead.org>
Cc: brouer@redhat.com, davem@davemloft.net, netdev@vger.kernel.org,
lorenzo@kernel.org, toke@redhat.com
Subject: Re: [PATCH net-next] net: page_pool: Add documentation for page_pool API
Date: Fri, 21 Feb 2020 09:12:55 +0200 [thread overview]
Message-ID: <20200221071255.GA863284@apalos.home> (raw)
In-Reply-To: <0bfe362b-276d-21ad-24b9-67813c0cd50a@infradead.org>
Hi Randy,
On Thu, Feb 20, 2020 at 04:14:00PM -0800, Randy Dunlap wrote:
> Hi again Ilias,
>
> On 2/20/20 10:25 AM, Ilias Apalodimas wrote:
> > Add documentation explaining the basic functionality and design
> > principles of the API
> >
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > ---
> > Documentation/networking/page_pool.rst | 159 +++++++++++++++++++++++++
> > 1 file changed, 159 insertions(+)
> > create mode 100644 Documentation/networking/page_pool.rst
> >
> > diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
> > new file mode 100644
> > index 000000000000..098d339ef272
> > --- /dev/null
> > +++ b/Documentation/networking/page_pool.rst
> > @@ -0,0 +1,159 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +=============
> > +Page Pool API
> > +=============
> > +
> > +The page_pool allocator is optimized for the XDP mode that uses one frame
> > +per page, but it can fall back to the regular page allocator APIs.
> > +
> > +Basic use involve replacing alloc_pages() calls with the
>
> involves
>
Ok
> > +page_pool_alloc_pages() call. Drivers should use page_pool_dev_alloc_pages()
> > +when replacing dev_alloc_pages().
> > +
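Maybe worth showing the replacement side by side in the doc. This is an illustrative fragment only (not compilable on its own); `pool` is assumed to be an already-created page_pool:

```c
/* Before: plain page allocator */
struct page *page = dev_alloc_pages(0);

/* After: page_pool equivalent (safe to call from NAPI/softirq context) */
struct page *page = page_pool_dev_alloc_pages(pool);
```
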
> ...
>
> > +
> > +Architecture overview
> > +=====================
> > +
> > +.. code-block:: none
> > +
> ...
>
> > +
> > +API interface
> > +=============
> > +The number of pools created **must** match the number of hardware queues
> > +unless hardware restrictions make that impossible. Anything else would defeat
> > +the purpose of the page pool, which is to allocate pages quickly from a cache
> > +without locking. This lockless guarantee naturally comes from running under a
> > +NAPI softirq. The protection doesn't strictly have to be NAPI; any guarantee
> > +that allocating a page will cause no race conditions is enough.
> > +
> > +* page_pool_create(): Create a pool.
> > + * flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
> > + * order: order^n pages on allocation
>
> what is "n" above?
> My quick reading of mm/page_alloc.c suggests that order is the power of 2
> that should be used for the memory allocation... ???
Yes, this must change to 2^order.
>
> > + * pool_size: size of the ptr_ring
> > + * nid: preferred NUMA node for allocation
> > + * dev: struct device. Used on DMA operations
> > + * dma_dir: DMA direction
> > + * max_len: max DMA sync memory size
> > + * offset: DMA address offset
> > +
> ...
>
> > +
> > +Coding examples
> > +===============
> > +
> > +Registration
> > +------------
> > +
> > +.. code-block:: c
> > +
> > + /* Page pool registration */
> > + struct page_pool_params pp_params = { 0 };
> > + struct xdp_rxq_info xdp_rxq;
> > + int err;
> > +
> > + pp_params.order = 0;
>
> so 0^n?
See above!
>
> > + /* internal DMA mapping in page_pool */
> > + pp_params.flags = PP_FLAG_DMA_MAP;
> > + pp_params.pool_size = DESC_NUM;
> > + pp_params.nid = NUMA_NO_NODE;
> > + pp_params.dev = priv->dev;
> > + pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
> > + page_pool = page_pool_create(&pp_params);
> > +
> > + err = xdp_rxq_info_reg(&xdp_rxq, ndev, 0);
> > + if (err)
> > + goto err_out;
> > +
> > + err = xdp_rxq_info_reg_mem_model(&xdp_rxq, MEM_TYPE_PAGE_POOL, page_pool);
> > + if (err)
> > + goto err_out;
> > +
> > +NAPI poller
> > +-----------
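For the poller section, I had something like the sketch below in mind. The mydrv_* names and descriptor handling are made up for illustration; only the page_pool calls (page_pool_dev_alloc_pages(), page_pool_recycle_direct()) are the real API:

```c
/* Hypothetical NAPI poll loop using page_pool. Runs in softirq
 * context, which is what gives the pool its lockless guarantee. */
static int mydrv_rx_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_priv *priv = container_of(napi, struct mydrv_priv, napi);
	int done = 0;

	while (done < budget) {
		struct page *page;

		if (!mydrv_rx_desc_ready(priv))	/* hypothetical helper */
			break;

		/* Refill the descriptor with a fresh page from the pool */
		page = page_pool_dev_alloc_pages(priv->page_pool);
		if (!page)
			break;

		mydrv_process_rx_desc(priv, page);	/* hypothetical helper */
		done++;
	}

	/* On XDP_DROP (or a TX completion of a recycled frame), the page
	 * can go straight back to the pool's lockless cache:
	 *	page_pool_recycle_direct(priv->page_pool, page);
	 */

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}
```
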
>
> thanks.
Thanks again for taking the time
> --
> ~Randy
>
Cheers
/Ilias