From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org,
Lorenzo Bianconi <lorenzo@kernel.org>,
mtahhan@redhat.com, mcroce@microsoft.com
Subject: Re: [PATCH net-next] xdp: improve page_pool xdp_return performance
Date: Thu, 22 Sep 2022 12:56:56 +0300
Message-ID: <YywxaOL0G9QAMzjA@hades>
In-Reply-To: <166377993287.1737053.10258297257583703949.stgit@firesoul>
Hi Jesper,
On Wed, Sep 21, 2022 at 07:05:32PM +0200, Jesper Dangaard Brouer wrote:
> During LPC2022 I met up with my page_pool co-maintainer Ilias. While
> discussing the page_pool code we realised/remembered that certain
> optimizations had not been fully utilised.
>
> Since commit c07aea3ef4d4 ("mm: add a signature in struct page") struct
> page has a direct pointer to the page_pool object the page was
> allocated from.
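For readers who don't have mm_types.h in front of them, this is roughly the
layout that commit introduced (a simplified sketch, not the exact upstream
definition; the real fields live inside a union in struct page and the field
list here is abbreviated):

	/* Simplified sketch of the page_pool members in struct page */
	struct page {
		/* ... other unioned fields ... */
		struct {	/* page_pool used by netstack */
			unsigned long pp_magic;		/* PP_SIGNATURE, marks page_pool pages */
			struct page_pool *pp;		/* back-pointer to the owning page_pool */
			unsigned long _pp_mapping_pad;
			unsigned long dma_addr;		/* DMA mapping kept by the pool */
		};
		/* ... */
	};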
>
> Thus, with this info it is possible to skip the rhashtable_lookup to
> find the page_pool object in __xdp_return().
>
> The rcu_read_lock can be removed as it was tied to xdp_mem_allocator.
> The page_pool object is still safe to access as it tracks inflight pages
> and (potentially) schedules final release from a work queue.
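To spell that out a bit: page_pool_destroy() keeps the pool around as long as
pages are still in flight, retrying the final release from a delayed work
item. Roughly (a simplified sketch, not the exact code in
net/core/page_pool.c):

	static void page_pool_release_retry(struct work_struct *wq)
	{
		struct page_pool *pool =
			container_of(to_delayed_work(wq), typeof(*pool), release_dw);

		/* Pages still out in drivers/stack? Try again later instead
		 * of freeing the pool under them.
		 */
		if (page_pool_inflight(pool)) {
			schedule_delayed_work(&pool->release_dw, HZ);
			return;
		}

		page_pool_free(pool);
	}

So any page that still carries a valid page->pp back-pointer also keeps its
pool alive until the page is returned, which is why dropping the RCU section
here is fine.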
>
> I created a micro-benchmark of XDP redirecting from mlx5 into veth with
> an XDP_DROP bpf-prog on the peer veth device. This increased performance
> by 6.5%, from approx 8.45 Mpps to 9 Mpps, corresponding to spending
> 7 nanoseconds (27 cycles at 3.8 GHz) less per packet.
Thanks for the detailed testing!
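For anyone wanting to sanity-check those numbers, the back-of-the-envelope
arithmetic (plain userspace C, nothing kernel-specific):

	#include <stdio.h>

	int main(void)
	{
		double before = 8.45e6, after = 9.0e6;	/* packets per second */
		double ghz = 3.8;			/* CPU clock */
		double ns_saved = 1e9 / before - 1e9 / after;

		printf("speedup: %.1f%%\n", (after / before - 1.0) * 100.0);
		printf("saved:   %.1f ns/pkt (~%.0f cycles @ %.1f GHz)\n",
		       ns_saved, ns_saved * ghz, ghz);
		return 0;
	}

which comes out at ~6.5% and ~7.2 ns per packet, i.e. ~27 cycles at 3.8 GHz,
matching the numbers above.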
>
> Suggested-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
> net/core/xdp.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 24420209bf0e..844c9d99dc0e 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -375,19 +375,17 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
> void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
> 		  struct xdp_buff *xdp)
> {
> -	struct xdp_mem_allocator *xa;
> 	struct page *page;
>
> 	switch (mem->type) {
> 	case MEM_TYPE_PAGE_POOL:
> -		rcu_read_lock();
> -		/* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
> -		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> 		page = virt_to_head_page(data);
> 		if (napi_direct && xdp_return_frame_no_direct())
> 			napi_direct = false;
> -		page_pool_put_full_page(xa->page_pool, page, napi_direct);
> -		rcu_read_unlock();
> +		/* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE)
> +		 * as mem->type knows this is a page_pool page
> +		 */
> +		page_pool_put_full_page(page->pp, page, napi_direct);
> 		break;
> 	case MEM_TYPE_PAGE_SHARED:
> 		page_frag_free(data);
>
>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Thread overview: 4+ messages
2022-09-21 17:05 [PATCH net-next] xdp: improve page_pool xdp_return performance Jesper Dangaard Brouer
2022-09-21 22:16 ` Toke Høiland-Jørgensen
2022-09-22 9:56 ` Ilias Apalodimas [this message]
2022-09-26 18:40 ` patchwork-bot+netdevbpf