From: Simon Horman <horms@kernel.org>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	liuyonglong@huawei.com, fanghaiqing@huawei.com,
	zhangkun09@huawei.com, Robin Murphy <robin.murphy@arm.com>,
	Alexander Duyck <alexander.duyck@gmail.com>,
	IOMMU <iommu@lists.linux.dev>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Eric Dumazet <edumazet@google.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next v6 8/8] page_pool: use list instead of array for alloc cache
Date: Tue, 7 Jan 2025 12:03:44 +0000
Message-ID: <20250107120344.GI33144@kernel.org>
In-Reply-To: <20250106130116.457938-9-linyunsheng@huawei.com>

On Mon, Jan 06, 2025 at 09:01:16PM +0800, Yunsheng Lin wrote:
> As the alloc cache is always protected by the NAPI context,
> use encoded_next as a pointer to the next item, avoiding the
> need for the array.
> 
> Testing shows an improvement of about 3ns in the
> 'time_bench_page_pool01_fast_path' test case.
> 
> CC: Robin Murphy <robin.murphy@arm.com>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> CC: IOMMU <iommu@lists.linux.dev>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>

...
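
As an aside for anyone skimming the series: the change keeps the per-NAPI
alloc cache as an intrusive singly linked list, with each item carrying
its own next link, instead of a fixed array of netmem entries. A
stand-alone sketch of that shape, using made-up names rather than the
real page_pool structures (and ignoring the pointer encoding and NAPI
protection details), looks roughly like this:

	#include <stddef.h>
	#include <stdio.h>

	struct item {
		struct item *next;	/* plays the role of encoded_next */
		int payload;		/* plays the role of pp_netmem */
	};

	struct cache {
		struct item *list;	/* head of the LIFO cache */
		unsigned int count;
	};

	static void cache_push(struct cache *c, struct item *it)
	{
		it->next = c->list;
		c->list = it;
		c->count++;
	}

	static struct item *cache_pop(struct cache *c)
	{
		struct item *it = c->list;

		if (!it)
			return NULL;
		c->list = it->next;
		c->count--;
		return it;
	}

	int main(void)
	{
		struct cache c = { NULL, 0 };
		struct item a = { NULL, 1 }, b = { NULL, 2 };

		cache_push(&c, &a);
		cache_push(&c, &b);

		/* LIFO order: b (2) comes back first, then a (1) */
		printf("%d\n", cache_pop(&c)->payload);
		printf("%d\n", cache_pop(&c)->payload);
		return 0;
	}

Since push and pop only touch the list head, there is no per-slot array
bookkeeping on the fast path, which is presumably where the small
improvement comes from.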

> diff --git a/net/core/page_pool.c b/net/core/page_pool.c

...

> @@ -677,10 +698,12 @@ static void __page_pool_return_page(struct page_pool *pool, netmem_ref netmem,
>  
>  static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
>  {
> -	struct page_pool_item *refill;
> +	struct page_pool_item *refill, *alloc, *curr;
>  	netmem_ref netmem;
>  	int pref_nid; /* preferred NUMA node */
>  
> +	DEBUG_NET_WARN_ON_ONCE(pool->alloc.count || pool->alloc.list);
> +
>  	/* Quicker fallback, avoid locks when ring is empty */
>  	refill = pool->alloc.refill;
>  	if (unlikely(!refill && !READ_ONCE(pool->ring.list))) {
> @@ -698,6 +721,7 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
>  	pref_nid = numa_mem_id(); /* will be zero like page_to_nid() */
>  #endif
>  
> +	alloc = NULL;
>  	/* Refill alloc array, but only if NUMA match */
>  	do {
>  		if (unlikely(!refill)) {
> @@ -706,10 +730,13 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
>  				break;
>  		}
>  
> +		curr = refill;
>  		netmem = refill->pp_netmem;
>  		refill = page_pool_item_get_next(refill);
>  		if (likely(netmem_is_pref_nid(netmem, pref_nid))) {
> -			pool->alloc.cache[pool->alloc.count++] = netmem;
> +			page_pool_item_set_next(curr, alloc);
> +			pool->alloc.count++;
> +			alloc = curr;
>  		} else {
>  			/* NUMA mismatch;
>  			 * (1) release 1 page to page-allocator and
> @@ -729,7 +756,8 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
>  	/* Return last page */
>  	if (likely(pool->alloc.count > 0)) {
>  		atomic_sub(pool->alloc.count, &pool->ring.count);
> -		netmem = pool->alloc.cache[--pool->alloc.count];
> +		pool->alloc.list = page_pool_item_get_next(alloc);
> +		pool->alloc.count--;
>  		alloc_stat_inc(pool, refill);
>  	}
>  

Hi Yunsheng Lin,

The line of code that follows this hunk looks like this:

	return netmem;

With this patch applied, Smatch warns that netmem may be used
uninitialised here. I assume this is because the conditional assignment
that previously initialised it above has been removed.
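
A stripped-down, hypothetical illustration of the pattern that is
presumably being flagged (made-up names, not the actual kernel code):

	struct item_sketch {
		struct item_sketch *next;
		unsigned long value;	/* stands in for a netmem_ref */
	};

	unsigned long refill_sketch(struct item_sketch *refill)
	{
		unsigned long netmem;		/* never initialised up front */

		while (refill) {
			netmem = refill->value;	/* only assigned in the loop */
			refill = refill->next;
		}

		/* In the real function, the "count > 0" block used to
		 * re-read the last cached array entry here, which also
		 * (re)assigned netmem. With that read removed, a path
		 * where the loop body never runs reaches the return with
		 * netmem unwritten.
		 */
		return netmem;
	}

If that reading is right, taking the returned page from the head of the
rebuilt alloc list in the count > 0 block (mirroring what the old array
read did), or at least initialising netmem up front, would presumably
address it, but I will leave the right fix to you.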

...
