* [PATCH net-next v1 0/2] page_pool: bnxt_en: unlink old page pool in queue api using helper
From: David Wei @ 2024-06-25 19:55 UTC
To: Michael Chan, Andy Gospodarek, Jesper Dangaard Brouer,
Ilias Apalodimas, netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Commit 56ef27e3 unexported page_pool_unlink_napi() and renamed it to
page_pool_disable_direct_recycling(), since at the time there was no
in-tree user of page_pool_unlink_napi().

Since then, the Rx queue API and a bnxt implementation of it have been
merged. The bnxt implementation broadly follows these steps: allocate
new queue memory and a new page pool, stop the old Rx queue, swap,
then destroy the old queue memory and page pool. The existing NAPI
instance is reused.

The page pool to be destroyed is still linked to the reused NAPI
instance. Freeing it as-is triggers warnings in
page_pool_disable_direct_recycling(). In my initial patches I worked
around this by setting pp.napi to NULL directly from the driver.

Instead, bring back page_pool_unlink_napi() and use it, rather than
having the driver touch a core struct directly.
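
For reference, the restart sequence described above, sketched in C.
All names here are hypothetical stand-ins; the real queue API and bnxt
code differ in detail:

/* Sketch of the queue-restart flow. Helper names are illustrative,
 * not the actual bnxt or queue API symbols.
 */
static int rx_queue_restart_sketch(struct net_device *dev, int idx)
{
	void *new_mem, *old_mem;

	/* 1. Allocate new queue memory, including a fresh page pool
	 *    linked to the queue's existing NAPI instance.
	 */
	new_mem = rx_queue_mem_alloc(dev, idx);
	if (!new_mem)
		return -ENOMEM;

	/* 2. Stop the old Rx queue and take ownership of its memory.
	 *    The old page pool still points at the reused NAPI here.
	 */
	old_mem = rx_queue_stop(dev, idx);

	/* 3. Swap in the new memory and restart the queue. */
	rx_queue_start(dev, new_mem, idx);

	/* 4. Destroy the old memory. Destroying the old page pool
	 *    while it is still linked to the live NAPI triggers the
	 *    warnings mentioned above -- hence the unlink helper.
	 */
	rx_queue_mem_free(dev, old_mem);
	return 0;
}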
David Wei (2):
page_pool: reintroduce page_pool_unlink_napi()
bnxt_en: unlink page pool when stopping Rx queue
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +-----
include/net/page_pool/types.h | 5 +++++
net/core/page_pool.c | 6 ++++++
3 files changed, 12 insertions(+), 5 deletions(-)
--
2.43.0
* [PATCH net-next v1 1/2] page_pool: reintroduce page_pool_unlink_napi()
From: David Wei @ 2024-06-25 19:55 UTC
To: Michael Chan, Andy Gospodarek, Jesper Dangaard Brouer,
Ilias Apalodimas, netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Commit 56ef27e3 unexported page_pool_unlink_napi() and renamed it to
page_pool_disable_direct_recycling(), since at the time there was no
in-tree user of page_pool_unlink_napi().

Since then, the Rx queue API and a bnxt implementation of it have been
merged. The bnxt implementation broadly follows these steps: allocate
new queue memory and a new page pool, stop the old Rx queue, swap,
then destroy the old queue memory and page pool. The existing NAPI
instance is reused.

The page pool to be destroyed is still linked to the reused NAPI
instance. Freeing it as-is triggers warnings in
page_pool_disable_direct_recycling(). In my initial patches I worked
around this by setting pp.napi to NULL directly from the driver.

Instead, bring back page_pool_unlink_napi() and use it, rather than
having the driver touch a core struct directly.
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David Wei <dw@davidwei.uk>
---
include/net/page_pool/types.h | 5 +++++
net/core/page_pool.c | 6 ++++++
2 files changed, 11 insertions(+)
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 7e8477057f3d..aa3e615f1ae6 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -229,12 +229,17 @@ struct page_pool *page_pool_create_percpu(const struct page_pool_params *params,
struct xdp_mem_info;
#ifdef CONFIG_PAGE_POOL
+void page_pool_unlink_napi(struct page_pool *pool);
void page_pool_destroy(struct page_pool *pool);
void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
const struct xdp_mem_info *mem);
void page_pool_put_page_bulk(struct page_pool *pool, void **data,
int count);
#else
+static inline void page_pool_unlink_napi(struct page_pool *pool)
+{
+}
+
static inline void page_pool_destroy(struct page_pool *pool)
{
}
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 3927a0a7fa9a..ec274dde0e32 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1021,6 +1021,11 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
*/
WRITE_ONCE(pool->cpuid, -1);
+ page_pool_unlink_napi(pool);
+}
+
+void page_pool_unlink_napi(struct page_pool *pool)
+{
if (!pool->p.napi)
return;
@@ -1032,6 +1037,7 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
WRITE_ONCE(pool->p.napi, NULL);
}
+EXPORT_SYMBOL(page_pool_unlink_napi);
void page_pool_destroy(struct page_pool *pool)
{
--
2.43.0
* [PATCH net-next v1 2/2] bnxt_en: unlink page pool when stopping Rx queue
From: David Wei @ 2024-06-25 19:55 UTC
To: Michael Chan, Andy Gospodarek, Jesper Dangaard Brouer,
Ilias Apalodimas, netdev
Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni
Have bnxt call the restored page_pool_unlink_napi() to unlink the old
page pool when resetting a queue, instead of setting a core struct
field to NULL directly from the driver.
Signed-off-by: David Wei <dw@davidwei.uk>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 1bd0c5973252..2884ab006ea2 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -15049,11 +15049,6 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
bnxt_free_one_rx_ring(bp, rxr);
bnxt_free_one_rx_agg_ring(bp, rxr);
- /* At this point, this NAPI instance has another page pool associated
- * with it. Disconnect here before freeing the old page pool to avoid
- * warnings.
- */
- rxr->page_pool->p.napi = NULL;
page_pool_destroy(rxr->page_pool);
rxr->page_pool = NULL;
@@ -15173,6 +15168,7 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
bnxt_hwrm_rx_ring_free(bp, rxr, false);
bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
rxr->rx_next_cons = 0;
+ page_pool_unlink_napi(rxr->page_pool);
memcpy(qmem, rxr, sizeof(*rxr));
bnxt_init_rx_ring_struct(bp, qmem);
--
2.43.0
* Re: [PATCH net-next v1 1/2] page_pool: reintroduce page_pool_unlink_napi()
From: Jakub Kicinski @ 2024-06-25 23:39 UTC
To: David Wei
Cc: Michael Chan, Andy Gospodarek, Jesper Dangaard Brouer,
Ilias Apalodimas, netdev, David S. Miller, Eric Dumazet,
Paolo Abeni
On Tue, 25 Jun 2024 12:55:21 -0700 David Wei wrote:
> #ifdef CONFIG_PAGE_POOL
> +void page_pool_unlink_napi(struct page_pool *pool);
> void page_pool_destroy(struct page_pool *pool);
> void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
> const struct xdp_mem_info *mem);
> void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> int count);
> #else
> +static inline void page_pool_unlink_napi(struct page_pool *pool)
> +{
> +}
All callers must select PAGE_POOL; I don't think we need the empty
static inline in this particular case.
> static inline void page_pool_destroy(struct page_pool *pool)
> {
> }
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 3927a0a7fa9a..ec274dde0e32 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -1021,6 +1021,11 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
> */
> WRITE_ONCE(pool->cpuid, -1);
>
> + page_pool_unlink_napi(pool);
No need to split page_pool_disable_direct_recycling()
into two; we can write cpuid, it won't hurt.
The purpose of the function didn't really change when Olek
renamed it. Unlinking NAPI is also precisely to prevent recycling.
So you can either export page_pool_disable_direct_recycling(),
add a wrapper called page_pool_unlink_napi(), or come up with
another name... But there's no need to split it.
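
For illustration, a minimal sketch of the wrapper option, assuming
page_pool_disable_direct_recycling() itself stays in one piece:

/* Hypothetical wrapper, not an actual patch: expose the old name
 * as a thin shim over the existing function.
 */
void page_pool_unlink_napi(struct page_pool *pool)
{
	/* Also clears pool->cpuid, which is harmless for this caller. */
	page_pool_disable_direct_recycling(pool);
}
EXPORT_SYMBOL(page_pool_unlink_napi);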
> +}
> +
> +void page_pool_unlink_napi(struct page_pool *pool)
> +{
> if (!pool->p.napi)
> return;
>
> @@ -1032,6 +1037,7 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
--
pw-bot: cr
* Re: [PATCH net-next v1 1/2] page_pool: reintroduce page_pool_unlink_napi()
From: David Wei @ 2024-06-26 0:10 UTC
To: Jakub Kicinski
Cc: Michael Chan, Andy Gospodarek, Jesper Dangaard Brouer,
Ilias Apalodimas, netdev, David S. Miller, Eric Dumazet,
Paolo Abeni
On 2024-06-25 16:39, Jakub Kicinski wrote:
> On Tue, 25 Jun 2024 12:55:21 -0700 David Wei wrote:
>> #ifdef CONFIG_PAGE_POOL
>> +void page_pool_unlink_napi(struct page_pool *pool);
>> void page_pool_destroy(struct page_pool *pool);
>> void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
>> const struct xdp_mem_info *mem);
>> void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>> int count);
>> #else
>> +static inline void page_pool_unlink_napi(struct page_pool *pool)
>> +{
>> +}
>
> All callers must select PAGE_POOL, I don't think we need the empty
> static inline in this particular case.
Got it, I'll remove this.
>
>> static inline void page_pool_destroy(struct page_pool *pool)
>> {
>> }
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 3927a0a7fa9a..ec274dde0e32 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -1021,6 +1021,11 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)
>> */
>> WRITE_ONCE(pool->cpuid, -1);
>>
>> + page_pool_unlink_napi(pool);
>
> No need to split page_pool_disable_direct_recycling()
> into two; we can write cpuid, it won't hurt.
Ah, I see.
>
> The purpose of the function didn't really change when Olek
> renamed it. Unlinking NAPI is also precisely to prevent recycling.
> So you can either export page_pool_disable_direct_recycling(),
> add a wrapper called page_pool_unlink_napi(), or come up with
> another name... But there's no need to split it.
Thanks for the suggestions. I'll export
page_pool_disable_direct_recycling().
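
For illustration, the exported-function approach might look roughly
like this; a sketch only, the actual v2 may differ:

/* net/core/page_pool.c: drop the static keyword and export. The
 * existing function body stays as-is; its middle is elided here.
 */
void page_pool_disable_direct_recycling(struct page_pool *pool)
{
	WRITE_ONCE(pool->cpuid, -1);

	if (!pool->p.napi)
		return;

	/* ... existing NAPI-unlink checks elided ... */
	WRITE_ONCE(pool->p.napi, NULL);
}
EXPORT_SYMBOL(page_pool_disable_direct_recycling);

/* drivers/net/ethernet/broadcom/bnxt/bnxt.c, in bnxt_queue_stop() */
page_pool_disable_direct_recycling(rxr->page_pool);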
>
>> +}
>> +
>> +void page_pool_unlink_napi(struct page_pool *pool)
>> +{
>> if (!pool->p.napi)
>> return;
>>
>> @@ -1032,6 +1037,7 @@ static void page_pool_disable_direct_recycling(struct page_pool *pool)