From: Lorenzo Bianconi <lorenzo@kernel.org>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
	kuba@kernel.org, davem@davemloft.net, edumazet@google.com,
	pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org,
	linyunsheng@huawei.com, toke@redhat.com
Subject: Re: [RFC net-next] net: page_pool: fix recycle stats for percpu page_pool allocator
Date: Thu, 15 Feb 2024 14:51:46 +0100	[thread overview]
Message-ID: <Zc4W8iNOgqI8xrCT@lore-desk> (raw)
In-Reply-To: <bff45ab9-2818-4b37-837e-f18ffcab8f47@intel.com>


> From: Lorenzo Bianconi <lorenzo@kernel.org>
> Date: Wed, 14 Feb 2024 19:08:40 +0100
> 
> > Use global page_pool_recycle_stats percpu counter for percpu page_pool
> > allocator.
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  net/core/page_pool.c | 18 +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> > 
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 6e0753e6a95b..1bb83b6e7a61 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> > @@ -31,6 +31,8 @@
> >  #define BIAS_MAX	(LONG_MAX >> 1)
> >  
> >  #ifdef CONFIG_PAGE_POOL_STATS
> > +static DEFINE_PER_CPU(struct page_pool_recycle_stats, pp_recycle_stats);
> > +
> >  /* alloc_stat_inc is intended to be used in softirq context */
> >  #define alloc_stat_inc(pool, __stat)	(pool->alloc_stats.__stat++)
> >  /* recycle_stat_inc is safe to use when preemption is possible. */
> > @@ -220,14 +222,19 @@ static int page_pool_init(struct page_pool *pool,
> >  	pool->has_init_callback = !!pool->slow.init_callback;
> >  
> >  #ifdef CONFIG_PAGE_POOL_STATS
> > -	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
> > -	if (!pool->recycle_stats)
> > -		return -ENOMEM;
> > +	if (cpuid < 0) {
> 
> TBH I don't like the idea of assuming that only system page_pools might
> be created with cpuid >= 0.
> For example, if I have an Rx queue always pinned to one CPU, I might
> want to create a PP for this queue with the cpuid set already to save
> some cycles when recycling. We might also reuse cpuid later for some
> more optimizations or features.
> 
> Maybe add a new PP_FLAG indicating that system percpu PP stats should be
> used?

Ack, I like the idea. What about adding a flag to indicate that this is a
percpu page_pool, instead of relying on the cpuid value? It might be useful
for other things in the future as well. What do you think?

Regards,
Lorenzo

> 
> > +		pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
> > +		if (!pool->recycle_stats)
> > +			return -ENOMEM;
> > +	} else {
> > +		pool->recycle_stats = &pp_recycle_stats;
> > +	}
> >  #endif
> >  
> >  	if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0) {
> >  #ifdef CONFIG_PAGE_POOL_STATS
> > -		free_percpu(pool->recycle_stats);
> > +		if (cpuid < 0)
> > +			free_percpu(pool->recycle_stats);
> >  #endif
> >  		return -ENOMEM;
> >  	}
> > @@ -251,7 +258,8 @@ static void page_pool_uninit(struct page_pool *pool)
> >  		put_device(pool->p.dev);
> >  
> >  #ifdef CONFIG_PAGE_POOL_STATS
> > -	free_percpu(pool->recycle_stats);
> > +	if (pool->cpuid < 0)
> > +		free_percpu(pool->recycle_stats);
> >  #endif
> >  }
> >  
> 
> Thanks,
> Olek


Thread overview: 7+ messages
2024-02-14 18:08 [RFC net-next] net: page_pool: fix recycle stats for percpu page_pool allocator Lorenzo Bianconi
2024-02-14 21:42 ` Toke Høiland-Jørgensen
2024-02-14 22:46   ` Lorenzo Bianconi
2024-02-15 13:41 ` Alexander Lobakin
2024-02-15 13:51   ` Lorenzo Bianconi [this message]
2024-02-15 15:04   ` Jakub Kicinski
2024-02-15 15:31     ` Lorenzo Bianconi
