From: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: <zhangkun09@huawei.com>, <fanghaiqing@huawei.com>,
	<liuyonglong@huawei.com>, Yunsheng Lin <linyunsheng@huawei.com>,
	Alexander Lobakin <aleksander.lobakin@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	Eric Dumazet <edumazet@google.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <davem@davemloft.net>,
	<kuba@kernel.org>, <pabeni@redhat.com>
Subject: Re: [PATCH net-next v3 2/3] page_pool: fix timing for checking and disabling napi_local
Date: Thu, 7 Nov 2024 14:17:51 +0800	[thread overview]
Message-ID: <1730960271.064656-1-xuanzhuo@linux.alibaba.com> (raw)
In-Reply-To: <20241022032214.3915232-3-linyunsheng@huawei.com>

On Tue, 22 Oct 2024 11:22:12 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
> A page_pool page may be freed from skb_defer_free_flush() in
> softirq context without being bound to any specific napi, which
> may cause a use-after-free problem due to the time window
> below, where CPU1 may still access napi->list_owner after CPU0
> frees the napi memory:
>
>             CPU 0                           CPU1
>       page_pool_destroy()          skb_defer_free_flush()
>              .                               .
>              .                napi = READ_ONCE(pool->p.napi);
>              .                               .
> page_pool_disable_direct_recycling()         .
>    driver free napi memory                   .
>              .                               .
>              .       napi && READ_ONCE(napi->list_owner) == cpuid
>              .                               .
>
> Use the RCU mechanism to avoid the above problem.
>
> Note: the above was found during code review of how to fix the
> problem in [1].
>
> 1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/
>
> Fixes: dd64b232deb8 ("page_pool: unlink from napi during destroy")
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
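
The RCU pairing here is easy to misread, so here is a minimal model of
it as I understand it (an illustrative sketch with made-up names such
as my_pool/my_napi, not the actual page_pool code): the softirq reader
must load pool->p.napi and dereference napi->list_owner inside one
read-side critical section, and the destroy path must clear the pointer
and then wait in synchronize_rcu() before the driver may free the napi
memory.

/* Illustrative model of the pairing; not the real page_pool code. */
#include <linux/rcupdate.h>

struct my_napi { int list_owner; };
struct my_pool { struct my_napi *napi; };

/* Reader side, softirq context (models page_pool_napi_local()). */
static bool my_napi_local(struct my_pool *pool, int cpuid)
{
	struct my_napi *napi;
	bool local;

	rcu_read_lock();	/* pin any napi we observe */
	napi = READ_ONCE(pool->napi);
	local = napi && READ_ONCE(napi->list_owner) == cpuid;
	rcu_read_unlock();

	return local;
}

/* Updater side (models page_pool_destroy()). */
static void my_pool_destroy(struct my_pool *pool)
{
	WRITE_ONCE(pool->napi, NULL);	/* disable direct recycling */
	synchronize_rcu();		/* wait for readers that may still
					 * hold the old napi pointer */
	/* only now may the driver free the napi memory */
}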


Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

> CC: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> As the IOMMU fix patch depends on the synchronize_rcu() added in
> this patch, and the time window is so small that this doesn't seem
> to be an urgent fix, target net-next as the IOMMU fix patch does.
> ---
>  net/core/page_pool.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a813d30d2135..dd497f5c927d 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -795,6 +795,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>  static bool page_pool_napi_local(const struct page_pool *pool)
>  {
>  	const struct napi_struct *napi;
> +	bool napi_local;
>  	u32 cpuid;
>
>  	if (unlikely(!in_softirq()))
> @@ -810,9 +811,15 @@ static bool page_pool_napi_local(const struct page_pool *pool)
>  	if (READ_ONCE(pool->cpuid) == cpuid)
>  		return true;
>
> +	/* Synchronized with page_pool_destroy() to avoid use-after-free
> +	 * for 'napi'.
> +	 */
> +	rcu_read_lock();
>  	napi = READ_ONCE(pool->p.napi);
> +	napi_local = napi && READ_ONCE(napi->list_owner) == cpuid;
> +	rcu_read_unlock();
>
> -	return napi && READ_ONCE(napi->list_owner) == cpuid;
> +	return napi_local;
>  }
>
>  void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
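
One point worth spelling out for future readers (illustrative only,
not a requested change): the napi->list_owner dereference has to stay
inside the critical section. A shape like the one below would re-open
the window, because the napi memory may be freed between the unlock
and the dereference:

	rcu_read_lock();
	napi = READ_ONCE(pool->p.napi);
	rcu_read_unlock();
	/* window: the driver may free the napi memory here */
	return napi && READ_ONCE(napi->list_owner) == cpuid; /* use-after-free */
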
> @@ -1126,6 +1133,12 @@ void page_pool_destroy(struct page_pool *pool)
>  	if (!page_pool_release(pool))
>  		return;
>
> +	/* Paired with the rcu read lock in page_pool_napi_local() to ensure
> +	 * the clearing of pool->p.napi in page_pool_disable_direct_recycling()
> +	 * is seen before returning to the driver to free the napi instance.
> +	 */
> +	synchronize_rcu();
> +
>  	page_pool_detached(pool);
>  	pool->defer_start = jiffies;
>  	pool->defer_warn  = jiffies + DEFER_WARN_INTERVAL;
> --
> 2.33.0
>
>
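
For context, a hedged sketch of the driver-side teardown ordering that
the new synchronize_rcu() protects (hypothetical driver and names; it
assumes the napi lives in memory freed together with the netdev):

static void my_driver_remove(struct my_priv *priv)
{
	page_pool_destroy(priv->pool);	/* includes synchronize_rcu(), so
					 * softirq readers that sampled
					 * pool->p.napi have finished */
	netif_napi_del(&priv->napi);	/* safe: no reader can still be
					 * touching napi->list_owner */
	free_netdev(priv->netdev);	/* napi memory goes away here */
}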
