From: Paolo Abeni <pabeni@redhat.com>
To: Yunsheng Lin <linyunsheng@huawei.com>,
	davem@davemloft.net, kuba@kernel.org
Cc: liuyonglong@huawei.com, fanghaiqing@huawei.com,
	zhangkun09@huawei.com,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Eric Dumazet <edumazet@google.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net v2 1/2] page_pool: fix timing for checking and disabling napi_local
Date: Tue, 1 Oct 2024 13:30:19 +0200	[thread overview]
Message-ID: <d123d288-4215-4a8c-9689-bbfe24c24b08@redhat.com> (raw)
In-Reply-To: <20240925075707.3970187-2-linyunsheng@huawei.com>

On 9/25/24 09:57, Yunsheng Lin wrote:
> A page_pool page may be freed from skb_defer_free_flush() in
> softirq context, which can cause concurrent access to the
> pool->alloc cache in the time window shown below, where CPU0
> and CPU1 may access the pool->alloc cache concurrently in
> page_pool_empty_alloc_cache_once() and
> page_pool_recycle_in_cache():
> 
>            CPU 0                           CPU1
>      page_pool_destroy()          skb_defer_free_flush()
>             .                               .
>             .                   page_pool_put_unrefed_page()
>             .                               .
>             .               allow_direct = page_pool_napi_local()
>             .                               .
> page_pool_disable_direct_recycling()       .
>             .                               .
> page_pool_empty_alloc_cache_once() page_pool_recycle_in_cache()
> 
> Use the RCU mechanism to avoid the above concurrent access problem.
> 
> Note: the above was found during code review of how to fix
> the problem in [1].
> 
> 1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/
> 
> Fixes: dd64b232deb8 ("page_pool: unlink from napi during destroy")
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> CC: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
>   net/core/page_pool.c | 31 ++++++++++++++++++++++++++++---
>   1 file changed, 28 insertions(+), 3 deletions(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a813d30d2135..bec6e717cd22 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -818,8 +818,17 @@ static bool page_pool_napi_local(const struct page_pool *pool)
>   void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>   				  unsigned int dma_sync_size, bool allow_direct)
>   {
> -	if (!allow_direct)
> +	bool allow_direct_orig = allow_direct;
> +
> +	/* page_pool_put_unrefed_netmem() is not supposed to be called with
> +	 * allow_direct set to true after page_pool_destroy() has been
> +	 * called, so the allow_direct == true case doesn't need synchronization.
> +	 */
> +	DEBUG_NET_WARN_ON_ONCE(allow_direct && pool->destroy_cnt);
> +	if (!allow_direct_orig) {
> +		rcu_read_lock();
>   		allow_direct = page_pool_napi_local(pool);
> +	}
>   
>   	netmem =
>   		__page_pool_put_page(pool, netmem, dma_sync_size, allow_direct);
> @@ -828,6 +837,9 @@ void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>   		recycle_stat_inc(pool, ring_full);
>   		page_pool_return_page(pool, netmem);
>   	}
> +
> +	if (!allow_direct_orig)
> +		rcu_read_unlock();

What about always acquiring the RCU read lock? Would that impact
performance negatively?

If not, I think that's preferable, as it would also keep static
checkers happy.
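
For illustration, a minimal sketch of that unconditional variant,
reconstructed from the hunk above plus the surrounding code in
net/core/page_pool.c (the page_pool_recycle_in_ring() fallback is filled
in from the upstream file, not the quoted hunk; untested, so treat it as
a shape rather than a final patch):

	void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
					  unsigned int dma_sync_size, bool allow_direct)
	{
		DEBUG_NET_WARN_ON_ONCE(allow_direct && pool->destroy_cnt);

		/* Taking the RCU read lock unconditionally keeps the
		 * lock/unlock pairing obvious to static checkers and drops
		 * the allow_direct_orig bookkeeping entirely.
		 */
		rcu_read_lock();

		if (!allow_direct)
			allow_direct = page_pool_napi_local(pool);

		netmem = __page_pool_put_page(pool, netmem, dma_sync_size,
					      allow_direct);
		if (netmem && !page_pool_recycle_in_ring(pool, netmem)) {
			/* Ring full, fall back to returning the page */
			recycle_stat_inc(pool, ring_full);
			page_pool_return_page(pool, netmem);
		}

		rcu_read_unlock();
	}

The extra cost on the recycle path should be small, since rcu_read_lock()
is cheap (a preempt_disable() or a per-task nesting counter increment,
depending on the config).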

>   }
>   EXPORT_SYMBOL(page_pool_put_unrefed_netmem);
>   

[...]

> @@ -1121,6 +1140,12 @@ void page_pool_destroy(struct page_pool *pool)
>   		return;
>   
>   	page_pool_disable_direct_recycling(pool);
> +
> +	/* Wait for the freeing side to see that direct recycling is
> +	 * disabled, to avoid concurrent access to the pool->alloc cache.
> +	 */
> +	synchronize_rcu();

When turning a device with a lot of queues on or off, the above could
introduce a lot of long waits under the RTNL lock, right?

What about moving the tail of this function into a separate helper and
using call_rcu() instead?

Thanks!

Paolo


> +
>   	page_pool_free_frag(pool);
>   
>   	if (!page_pool_release(pool))


