From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>,
netdev@vger.kernel.org, bpf@vger.kernel.org, davem@davemloft.net,
kuba@kernel.org, ilias.apalodimas@linaro.org
Subject: Re: [PATCH v3 net-next 2/5] net: page_pool: add bulk support for ptr_ring
Date: Wed, 4 Nov 2020 15:49:50 +0100
Message-ID: <20201104144950.GA310976@lore-desk>
In-Reply-To: <20201104132430.73594bb6@carbon>
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index ef98372facf6..236c5ed3aa66 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> [...]
> > @@ -408,6 +410,39 @@ void page_pool_put_page(struct page_pool *pool, struct page *page,
> > }
> > EXPORT_SYMBOL(page_pool_put_page);
> >
> > +void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> > + int count)
> > +{
> > + int i, len = 0;
> > +
> > + for (i = 0; i < count; i++) {
> > + struct page *page = virt_to_head_page(data[i]);
> > +
> > + if (likely(page_ref_count(page) == 1 &&
> > + pool_page_reusable(pool, page))) {
> > + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > + page_pool_dma_sync_for_device(pool, page, -1);
> > +
> > + /* bulk pages for ptr_ring cache */
> > + data[len++] = page;
> > + } else {
> > + page_pool_release_page(pool, page);
> > + put_page(page);
> > + }
> > + }
> > +
> > + /* Grab the producer spinlock for concurrent access to
> > + * ptr_ring page_pool cache
> > + */
> > + page_pool_ring_lock(pool);
> > + for (i = 0; i < len; i++) {
> > + if (__ptr_ring_produce(&pool->ring, data[i]))
> > + page_pool_return_page(pool, data[i]);
> > + }
> > + page_pool_ring_unlock(pool);
> > +}
> > +EXPORT_SYMBOL(page_pool_put_page_bulk);
>
> I don't like that you are replicating the core logic from
> page_pool_put_page() in this function. This means that we as
> maintainers need to keep both of these places up-to-date.
>
> Let me try to re-implement this, while sharing the refcnt logic:
> (completely untested, not even compiled)
Ack, I like the approach below; I will integrate it in v4.
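Just to double-check I am reading the refactor the same way: the shared
helper keeps all the recycling logic in one place and only reports back
whether the page is a candidate for the ptr_ring. Rough sketch of what I
have in mind (untested, assuming the existing in_serving_softirq()/
page_pool_recycle_in_cache() fast path stays as it is today):

static struct page *
__page_pool_put_page(struct page_pool *pool, struct page *page,
		     unsigned int dma_sync_size, bool allow_direct)
{
	if (likely(page_ref_count(page) == 1 &&
		   pool_page_reusable(pool, page))) {
		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
			page_pool_dma_sync_for_device(pool, page,
						      dma_sync_size);

		if (allow_direct && in_serving_softirq() &&
		    page_pool_recycle_in_cache(page, pool))
			return NULL;

		/* candidate for the ptr_ring cache, caller decides
		 * whether to produce it under the ring lock
		 */
		return page;
	}

	/* not reusable: unmap and release the page */
	page_pool_release_page(pool, page);
	put_page(page);
	return NULL;
}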
>
> ---
> net/core/page_pool.c | 58 +++++++++++++++++++++++++++++++++++++++++++-------
> 2 files changed, 51 insertions(+), 9 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ef98372facf6..c785e9825a0d 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -362,8 +362,9 @@ static bool pool_page_reusable(struct page_pool *pool, struct page *page)
> * If the page refcnt != 1, then the page will be returned to memory
> * subsystem.
[...]
>
> +/* Caller must not use data area after call, as this function overwrites it */
> +void page_pool_put_page_bulk(struct page_pool *pool, void **data, int count)
> +{
> + int i, len = 0, len2 = 0;
> +
> + for (i = 0; i < count; i++) {
> + struct page *page = virt_to_head_page(data[i]);
> +
> + page = __page_pool_put_page(pool, page, -1, false);
> +
> + /* Approved for recycling for ptr_ring cache */
> + if (page)
> + data[len++] = page;
> + }
I guess we can just return here when len is 0, so we avoid grabbing the
ptr_ring lock for nothing, agree? Something like the sketch below
(untested):
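	if (!len)
		return;	/* nothing left for the ring, skip the producer lock */

	/* Bulk producer into ptr_ring page_pool cache */
	page_pool_ring_lock(pool);
	...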
Regards,
Lorenzo
> +
> + /* Bulk producer into ptr_ring page_pool cache */
> + page_pool_ring_lock(pool);
> + for (i = 0; i < len; i++) {
> + if (__ptr_ring_produce(&pool->ring, data[i]))
> + data[len2++] = data[i];
> + }
> + page_pool_ring_unlock(pool);
> +
> + /* Unlikely case of ptr_ring cache full, free pages outside producer
> + * lock, given put_page() with refcnt==1 can be an expensive operation.
> + */
> + for (i = 0; i < len2; i++) {
> + page_pool_return_page(pool, data[i]);
> + }
> +}
> +EXPORT_SYMBOL(page_pool_put_page_bulk);
> +
> static void page_pool_empty_ring(struct page_pool *pool)
> {
> struct page *page;
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
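FWIW, to give the full picture on the caller side (patch 1/5 + 3/5), the
idea in a driver tx completion path is more or less the following.
Simplified sketch, txq/budget/get_completed_frame() are just placeholders
for the driver specific bits:

	struct xdp_frame_bulk bq;

	bq.xa = NULL;	/* no mem allocator cached yet */
	bq.count = 0;

	rcu_read_lock();	/* needed by xdp_return_frame_bulk() */
	while (budget--) {
		struct xdp_frame *xdpf = get_completed_frame(txq);

		/* queue up to XDP_BULK_QUEUE_SIZE frames and flush them
		 * to page_pool_put_page_bulk() in a single shot
		 */
		xdp_return_frame_bulk(xdpf, &bq);
	}
	xdp_flush_frame_bulk(&bq);
	rcu_read_unlock();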