From: Yunsheng Lin <linyunsheng@huawei.com>
To: "Yunsheng Lin" <yunshenglin0825@gmail.com>,
"Toke Høiland-Jørgensen" <toke@redhat.com>,
davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: <zhangkun09@huawei.com>, <liuyonglong@huawei.com>,
<fanghaiqing@huawei.com>,
Alexander Lobakin <aleksander.lobakin@intel.com>,
Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Eric Dumazet <edumazet@google.com>,
Simon Horman <horms@kernel.org>, <netdev@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v7 2/8] page_pool: fix timing for checking and disabling napi_local
Date: Tue, 14 Jan 2025 21:03:43 +0800 [thread overview]
Message-ID: <22de6033-744e-486e-bbd9-8950249cd018@huawei.com> (raw)
In-Reply-To: <5059df11-a85b-4404-8c24-a9ccd76924f3@gmail.com>
On 2025/1/11 13:24, Yunsheng Lin wrote:
...
>>> }
>>> void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
>>> @@ -1165,6 +1172,12 @@ void page_pool_destroy(struct page_pool *pool)
>>> if (!page_pool_release(pool))
>>> return;
>>> + /* Paired with the RCU lock in page_pool_napi_local() to ensure
>>> + * that the clearing of pool->p.napi in
>>> + * page_pool_disable_direct_recycling() is seen before returning
>>> + * to the driver, which may then free the napi instance.
>>> + */
>>> + synchronize_rcu();
>>
>> Most drivers call page_pool_destroy() in a loop for each RX queue, so
>> now you're introducing a full synchronize_rcu() wait for each queue.
>> That can delay tearing down the device significantly, so I don't think
>> this is a good idea.
>
> synchronize_rcu() is called after page_pool_release(pool), which means
> it is only called when there are some inflight pages, so there is not
> necessarily a full synchronize_rcu() wait for each queue.
>
> Anyway, it seems that there are some cases that need an explicit
> synchronize_rcu() and some cases that depend on another API providing
> synchronize_rcu() semantics, so maybe we should provide two different
> APIs for those cases, like the netif_napi_del()/__netif_napi_del()
> APIs do?
As the synchronize_rcu() is also needed to fix the DMA API misuse problem,
we cannot really handle it the way the netif_napi_del()/__netif_napi_del()
APIs do; the best I can think of is something like below:
bool need_sync = false;

for (each queue)
	need_sync |= page_pool_destroy_prepare(queue->pool);

if (need_sync)
	synchronize_rcu();

for (each queue)
	page_pool_destroy_commit(queue->pool);
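For concreteness, below is an untested sketch of how that split could look
in net/core/page_pool.c. The prepare/commit helpers are hypothetical names
taken from the pseudocode above; their bodies just re-split what
page_pool_destroy() currently does around the new synchronize_rcu(). One
caveat versus the pseudocode: page_pool_release() frees the pool when
nothing is inflight, so the driver would have to skip the commit step for
those pools:

bool page_pool_destroy_prepare(struct page_pool *pool)
{
	if (!pool)
		return false;

	page_pool_disable_direct_recycling(pool);
	page_pool_free_frag(pool);

	/* page_pool_release() frees the pool when there is no inflight
	 * page; returning true means inflight pages remain, so the
	 * caller needs a synchronize_rcu() before committing.
	 */
	return !!page_pool_release(pool);
}

void page_pool_destroy_commit(struct page_pool *pool)
{
	/* Only called for pools that still had inflight pages, after
	 * the caller has done a single synchronize_rcu() for all of
	 * them.
	 */
	page_pool_detached(pool);
	pool->defer_start = jiffies;
	pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;

	INIT_DELAYED_WORK(&pool->release_dw, page_pool_release_retry);
	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
}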
But I am not sure the above is worth the effort for now, as the
synchronize_rcu() is only called for the inflight-pages case.
Any better idea? If not, maybe we can optimize the above later if
the synchronize_rcu() does turn out to be a problem.
Thread overview: 31+ messages
2025-01-10 13:06 [PATCH net-next v7 0/8] fix two bugs related to page_pool Yunsheng Lin
2025-01-10 13:06 ` [PATCH net-next v7 1/8] page_pool: introduce page_pool_get_pp() API Yunsheng Lin
2025-01-10 13:06 ` [PATCH net-next v7 2/8] page_pool: fix timing for checking and disabling napi_local Yunsheng Lin
2025-01-10 15:40 ` Toke Høiland-Jørgensen
2025-01-11 5:24 ` Yunsheng Lin
2025-01-14 13:03 ` Yunsheng Lin [this message]
2025-01-20 11:24 ` Toke Høiland-Jørgensen
2025-01-22 11:02 ` Yunsheng Lin
2025-01-24 17:13 ` Toke Høiland-Jørgensen
2025-01-25 14:21 ` Yunsheng Lin
2025-01-27 13:47 ` Toke Høiland-Jørgensen
2025-02-04 13:51 ` Yunsheng Lin
2025-01-10 13:06 ` [PATCH net-next v7 3/8] page_pool: fix IOMMU crash when driver has already unbound Yunsheng Lin
2025-01-15 16:29 ` Jesper Dangaard Brouer
2025-01-16 12:52 ` Yunsheng Lin
2025-01-16 16:09 ` Jesper Dangaard Brouer
2025-01-17 11:56 ` Yunsheng Lin
2025-01-17 16:56 ` Jesper Dangaard Brouer
2025-01-18 13:36 ` Yunsheng Lin
2025-01-10 13:06 ` [PATCH net-next v7 4/8] page_pool: support unlimited number of inflight pages Yunsheng Lin
2025-01-10 13:06 ` [PATCH net-next v7 5/8] page_pool: skip dma sync operation for " Yunsheng Lin
2025-01-10 13:07 ` [PATCH net-next v7 6/8] page_pool: use list instead of ptr_ring for ring cache Yunsheng Lin
2025-01-10 13:07 ` [PATCH net-next v7 7/8] page_pool: batch refilling pages to reduce atomic operation Yunsheng Lin
2025-01-10 13:07 ` [PATCH net-next v7 8/8] page_pool: use list instead of array for alloc cache Yunsheng Lin
2025-01-14 14:31 ` [PATCH net-next v7 0/8] fix two bugs related to page_pool Jesper Dangaard Brouer
2025-01-15 11:33 ` Yunsheng Lin
2025-01-15 17:40 ` Jesper Dangaard Brouer
2025-01-16 12:52 ` Yunsheng Lin
2025-01-16 18:02 ` Jesper Dangaard Brouer
2025-01-17 11:35 ` Yunsheng Lin
2025-01-18 8:04 ` Jesper Dangaard Brouer