From: Jakub Kicinski <kuba@kernel.org>
To: David Wei <dw@davidwei.uk>
Cc: io-uring@vger.kernel.org, netdev@vger.kernel.org,
Jens Axboe <axboe@kernel.dk>,
Pavel Begunkov <asml.silence@gmail.com>,
Paolo Abeni <pabeni@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
David Ahern <dsahern@kernel.org>,
Mina Almasry <almasrymina@google.com>,
Stanislav Fomichev <stfomichev@gmail.com>,
Joe Damato <jdamato@fastly.com>,
Pedro Tammela <pctammela@mojatatu.com>
Subject: Re: [PATCH net-next v8 06/17] net: page pool: add helper creating area from pages
Date: Mon, 9 Dec 2024 19:29:59 -0800
Message-ID: <20241209192959.42425232@kernel.org>
In-Reply-To: <20241204172204.4180482-7-dw@davidwei.uk>
On Wed, 4 Dec 2024 09:21:45 -0800 David Wei wrote:
> From: Pavel Begunkov <asml.silence@gmail.com>
>
> Add a helper that takes an array of pages and initialises the passed-in
> memory provider's area with them, where each net_iov takes one page.
> It's also responsible for setting up the DMA mappings.
>
> We keep it in page_pool.c so as not to leak netmem details to outside
> providers like io_uring, which don't have access to netmem_priv.h
> and other private helpers.
User space will likely give us hugepages. Feels a bit wasteful to map
and manage them 4k at a time. But okay, we can optimize this later.
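To make the 4k-at-a-time point concrete: if the pages handed in are
physically contiguous (e.g. the 4k pages backing one hugepage), a single
mapping could in principle cover the whole run. Rough, untested sketch
only; mp_map_contig_run() is made up and not part of this patch:

/* Hypothetical: map a physically contiguous run of pages (e.g. one
 * hugepage split into 4k pages) with a single DMA mapping instead of
 * one dma_map_page() per page.
 */
#include <linux/dma-mapping.h>
#include <net/page_pool/types.h>

static dma_addr_t mp_map_contig_run(struct page_pool *pool,
				    struct page **pages, int nr)
{
	dma_addr_t dma;

	dma = dma_map_page_attrs(pool->p.dev, pages[0], 0,
				 (size_t)nr * PAGE_SIZE, pool->p.dma_dir,
				 DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return 0;	/* sketch: treat 0 as "no mapping" */
	return dma;
}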
> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> new file mode 100644
> index 000000000000..83d7eec0058d
> --- /dev/null
> +++ b/include/net/page_pool/memory_provider.h
> @@ -0,0 +1,10 @@
nit: missing SPDX
> +#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> +				 struct net_iov_area *area,
> +				 struct page **pages);
> +void page_pool_mp_release_area(struct page_pool *pool,
> +			       struct net_iov_area *area);
> +
> +#endif
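For context, a provider would drive these two helpers roughly as below.
Purely a hypothetical caller to show the contract; only the two
prototypes above are from this patch, everything else (names, the area
and page-array setup happening elsewhere) is made up:

#include <net/page_pool/memory_provider.h>

/* The caller owns the net_iov_area and the page array, one page per
 * net_iov, and undoes the mappings with the release helper on teardown.
 */
static int my_provider_setup(struct page_pool *pool,
			     struct net_iov_area *area,
			     struct page **pages)
{
	int ret;

	ret = page_pool_mp_init_paged_area(pool, area, pages);
	if (ret)
		return ret;

	/* ... hand the now-mapped net_iovs to the provider ... */
	return 0;
}

static void my_provider_teardown(struct page_pool *pool,
				 struct net_iov_area *area)
{
	page_pool_mp_release_area(pool, area);
}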
> +static void page_pool_release_page_dma(struct page_pool *pool,
> +				       netmem_ref netmem)
> +{
> +	__page_pool_release_page_dma(pool, netmem);
I'm guessing this wrapper is here to save text, since
__page_pool_release_page_dma() is always_inline? Maybe add a comment
saying so?
> +}
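If my guess above is right, something along these lines would do
(just a suggestion):

/* Out-of-line wrapper: __page_pool_release_page_dma() is always_inline,
 * so call it from one place here rather than expanding it at every
 * memory provider call site.
 */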
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> +				 struct net_iov_area *area,
> +				 struct page **pages)
> +{
> +	struct net_iov *niov;
> +	netmem_ref netmem;
> +	int i, ret = 0;
> +
> +	if (!pool->dma_map)
> +		return -EOPNOTSUPP;
> +
> +	for (i = 0; i < area->num_niovs; i++) {
> +		niov = &area->niovs[i];
> +		netmem = net_iov_to_netmem(niov);
> +
> +		page_pool_set_pp_info(pool, netmem);
Maybe move setting the pp info down, to after the page has been
successfully mapped. Technically it's not a bug to leave it set on a
netmem, but it would be on a struct page.
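I.e., roughly (untested, just the two quoted steps swapped):

	if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
		ret = -EINVAL;
		goto err_unmap_dma;
	}
	page_pool_set_pp_info(pool, netmem);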
> +		if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
> +			ret = -EINVAL;
> +			goto err_unmap_dma;
> +		}
> +	}
> +	return 0;
> +
> +err_unmap_dma:
> +	while (i--) {
> +		netmem = net_iov_to_netmem(&area->niovs[i]);
> +		page_pool_release_page_dma(pool, netmem);
> +	}
> +	return ret;
> +}
> +
> +void page_pool_mp_release_area(struct page_pool *pool,
> +			       struct net_iov_area *area)
> +{
> +	int i;
> +
> +	if (!pool->dma_map)
> +		return;
> +
> +	for (i = 0; i < area->num_niovs; i++) {
> +		struct net_iov *niov = &area->niovs[i];
> +
> +		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
> +	}
> +}