netdev.vger.kernel.org archive mirror
From: Jakub Kicinski <kuba@kernel.org>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: davem@davemloft.net, brouer@redhat.com, netdev@vger.kernel.org,
	edumazet@google.com, pabeni@redhat.com, hawk@kernel.org,
	ilias.apalodimas@linaro.org
Subject: Re: [RFC net-next 1/2] page_pool: allow caching from safely localized NAPI
Date: Fri, 31 Mar 2023 15:17:12 -0700	[thread overview]
Message-ID: <20230331151712.2ccd8317@kernel.org> (raw)
In-Reply-To: <20230331120643.09ce0e59@kernel.org>

On Fri, 31 Mar 2023 12:06:43 -0700 Jakub Kicinski wrote:
> > Makes sense, but do read the comment above struct pp_alloc_cache.
> > The sizing of pp_alloc_cache is important for this trick/heuristic to
> > work, meaning the pp_alloc_cache has enough room.
> > It is definitely on purpose that pp_alloc_cache has 128 elements and is
> > only refilled with 64 elements, which leaves room for this kind of
> > trick.  But if Eric's deferred skb freeing has more than 64 pages to
> > free, then we will likely fall back to ptr_ring recycling.
> > 
> > Code-wise, I suggest that you/we add a variant of
> > page_pool_put_page_bulk() that supports 'allow_direct' (looking at the
> > code below, you might already do this, as this patch over-steers
> > 'allow_direct').  The bulk API would then batch pages into the ptr_ring
> > in the cases where we cannot use the direct cache.  
> 
> Interesting point, let me re-run some tests with the statistics enabled.
> For a simple stream test I think it may just be too steady to trigger
> over/underflow. Each skb will carry at most 18 pages, and driver should
> only produce 64 packets / consume 64 pages. Each NAPI cycle will start
> by flushing the deferred free. So unless there is a hiccup either at the
> app or NAPI side - the flows of pages in each direction should be steady
> enough to do well with just 128 cache entries. Let me get the data and
> report back.

I patched the driver a bit to use page pool for HW-GRO.
Below are recycle stats with HW-GRO and with SW GRO + XDP_PASS (packet per page).

                     HW-GRO                 page=page
                  before       after      before       after
recycle:
cached:                0   138669686           0   150197505
cache_full:            0      223391           0       74582
ring:          138551933     9997191   149299454           0
ring_full:             0         488        3154      127590
released_refcnt:       0           0           0           0

alloc:
fast:          136491361   148615710   146969587   150322859
slow:               1772        1799         144         105
slow_high_order:       0           0           0           0
empty:              1772        1799         144         105
refill:          2165245      156302     2332880        2128
waive:                 0           0           0           0

This seems to confirm that for this trivial test the cache sizing is
good enough, and I won't see any benefit from batching (the cache is
full only 0.16% and 0.05% of the time, respectively).


Thread overview: 17+ messages
2023-03-31  4:39 [RFC net-next 1/2] page_pool: allow caching from safely localized NAPI Jakub Kicinski
2023-03-31  4:39 ` [RFC net-next 2/2] bnxt: hook NAPIs to page pools Jakub Kicinski
2023-03-31  5:15 ` [RFC net-next 1/2] page_pool: allow caching from safely localized NAPI Jakub Kicinski
2023-03-31  9:31 ` Jesper Dangaard Brouer
2023-03-31 19:06   ` Jakub Kicinski
2023-03-31 22:17     ` Jakub Kicinski [this message]
2023-04-03  9:16 ` Ilias Apalodimas
2023-04-03 15:05   ` Jakub Kicinski
2023-04-03 15:30     ` Ilias Apalodimas
2023-04-03 17:12       ` Jakub Kicinski
2023-04-05 17:04         ` Dragos Tatulea
2023-04-04  0:53 ` Yunsheng Lin
2023-04-04  1:45   ` Jakub Kicinski
2023-04-04  3:18     ` Yunsheng Lin
2023-04-04  4:21   ` Eric Dumazet
2023-04-04 10:50     ` Yunsheng Lin
2023-04-05 17:11       ` Eric Dumazet
