* [PATCH net-next 01/15] net: page_pool: add page_pool_get()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 10:35 ` Jesper Dangaard Brouer
` (2 more replies)
2025-08-20 2:56 ` [PATCH net-next 02/15] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
` (15 subsequent siblings)
16 siblings, 3 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
There is a page_pool_put() function but no get equivalent.
Having multiple references to a page pool is quite useful.
It avoids branching in create / destroy paths in drivers
which support memory providers.
Use the new helper in bnxt.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/page_pool/helpers.h | 5 +++++
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 11 +++++------
2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index db180626be06..aa3719f28216 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -489,6 +489,11 @@ page_pool_dma_sync_netmem_for_cpu(const struct page_pool *pool,
offset, dma_sync_size);
}
+static inline void page_pool_get(struct page_pool *pool)
+{
+ refcount_inc(&pool->user_cnt);
+}
+
static inline bool page_pool_put(struct page_pool *pool)
{
return refcount_dec_and_test(&pool->user_cnt);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 2d4fdf5a0dc5..1a571a90e6be 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3797,8 +3797,7 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
xdp_rxq_info_unreg(&rxr->xdp_rxq);
page_pool_destroy(rxr->page_pool);
- if (bnxt_separate_head_pool(rxr))
- page_pool_destroy(rxr->head_pool);
+ page_pool_destroy(rxr->head_pool);
rxr->page_pool = rxr->head_pool = NULL;
kfree(rxr->rx_agg_bmap);
@@ -3845,6 +3844,8 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
pool = page_pool_create(&pp);
if (IS_ERR(pool))
goto err_destroy_pp;
+ } else {
+ page_pool_get(pool);
}
rxr->head_pool = pool;
@@ -15900,8 +15901,7 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
xdp_rxq_info_unreg(&clone->xdp_rxq);
err_page_pool_destroy:
page_pool_destroy(clone->page_pool);
- if (bnxt_separate_head_pool(clone))
- page_pool_destroy(clone->head_pool);
+ page_pool_destroy(clone->head_pool);
clone->page_pool = NULL;
clone->head_pool = NULL;
return rc;
@@ -15919,8 +15919,7 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
xdp_rxq_info_unreg(&rxr->xdp_rxq);
page_pool_destroy(rxr->page_pool);
- if (bnxt_separate_head_pool(rxr))
- page_pool_destroy(rxr->head_pool);
+ page_pool_destroy(rxr->head_pool);
rxr->page_pool = NULL;
rxr->head_pool = NULL;
--
2.50.1
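For readers skimming past the diff: the create / destroy pattern the
commit message alludes to looks roughly like the sketch below. The
page_pool_create() / page_pool_get() / page_pool_destroy() calls are
the real API; struct example_rings and the function names are made up
for illustration.

#include <linux/err.h>
#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

struct example_rings {
	struct page_pool *payload_pool;
	struct page_pool *head_pool;
};

static int example_pools_alloc(struct example_rings *r,
			       struct page_pool_params *pp)
{
	struct page_pool *pool;

	pool = page_pool_create(pp);
	if (IS_ERR(pool))
		return PTR_ERR(pool);
	r->payload_pool = pool;

	/* Share one pool between both slots: take an extra user
	 * reference so each pointer is a first-class owner.
	 */
	page_pool_get(pool);
	r->head_pool = pool;
	return 0;
}

static void example_pools_free(struct example_rings *r)
{
	/* No branching needed: each call drops one user reference,
	 * and the pool is only torn down by the last one.
	 */
	page_pool_destroy(r->payload_pool);
	page_pool_destroy(r->head_pool);
	r->payload_pool = NULL;
	r->head_pool = NULL;
}

This is what the bnxt hunks rely on: when head_pool aliases page_pool,
the first page_pool_destroy() merely drops the reference taken by
page_pool_get(), which is why the bnxt_separate_head_pool() checks can go.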
* Re: [PATCH net-next 01/15] net: page_pool: add page_pool_get()
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
@ 2025-08-20 10:35 ` Jesper Dangaard Brouer
2025-08-20 10:58 ` Dragos Tatulea
2025-08-20 23:11 ` Mina Almasry
2 siblings, 0 replies; 33+ messages in thread
From: Jesper Dangaard Brouer @ 2025-08-20 10:35 UTC (permalink / raw)
To: Jakub Kicinski, davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, ilias.apalodimas, alexanderduyck,
sdf
On 20/08/2025 04.56, Jakub Kicinski wrote:
> There is a page_pool_put() function but no get equivalent.
> Having multiple references to a page pool is quite useful.
> It avoids branching in create / destroy paths in drivers
> which support memory providers.
>
> Use the new helper in bnxt.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> include/net/page_pool/helpers.h | 5 +++++
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 11 +++++------
> 2 files changed, 10 insertions(+), 6 deletions(-)
LGTM
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
* Re: [PATCH net-next 01/15] net: page_pool: add page_pool_get()
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
2025-08-20 10:35 ` Jesper Dangaard Brouer
@ 2025-08-20 10:58 ` Dragos Tatulea
2025-08-20 23:11 ` Mina Almasry
2 siblings, 0 replies; 33+ messages in thread
From: Dragos Tatulea @ 2025-08-20 10:58 UTC (permalink / raw)
To: Jakub Kicinski, davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, hawk, ilias.apalodimas, alexanderduyck, sdf
On Wed Aug 20, 2025 at 2:56 AM UTC, Jakub Kicinski wrote:
> There is a page_pool_put() function but no get equivalent.
> Having multiple references to a page pool is quite useful.
> It avoids branching in create / destroy paths in drivers
> which support memory providers.
>
> Use the new helper in bnxt.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Nice! Will take a note to use it in mlx5 as well once it
becomes available.
FWIW:
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
* Re: [PATCH net-next 01/15] net: page_pool: add page_pool_get()
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
2025-08-20 10:35 ` Jesper Dangaard Brouer
2025-08-20 10:58 ` Dragos Tatulea
@ 2025-08-20 23:11 ` Mina Almasry
2 siblings, 0 replies; 33+ messages in thread
From: Mina Almasry @ 2025-08-20 23:11 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Tue, Aug 19, 2025 at 7:57 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> There is a page_pool_put() function but no get equivalent.
> Having multiple references to a page pool is quite useful.
> It avoids branching in create / destroy paths in drivers
> which support memory providers.
>
> Use the new helper in bnxt.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
--
Thanks,
Mina
* [PATCH net-next 02/15] eth: fbnic: move page pool pointer from NAPI to the ring struct
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 03/15] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
` (14 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
In preparation for memory providers we need a closer association
between queues and page pools. We used to have a page pool at the
NAPI level to serve all associated queues but with MP the queues
under a NAPI may no longer be created equal.
The "ring" structure in fbnic is a descriptor ring. We have separate
"rings" for payload and header pages ("to device"), as well as a ring
for completions ("from device"). Technically we only need the page
pool pointers in the "to device" rings, so adding the pointer to
the ring struct is a bit wasteful. But it makes passing the structures
around much easier.
For now both "to device" rings store a pointer to the same
page pool. Using more than one queue per NAPI is extremely rare
so don't bother trying to share a single page pool between queues.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 16 ++--
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 83 +++++++++++---------
2 files changed, 55 insertions(+), 44 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index 873440ca6a31..a935a1acfb3e 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -121,11 +121,16 @@ struct fbnic_ring {
u32 head, tail; /* Head/Tail of ring */
- /* Deferred_head is used to cache the head for TWQ1 if an attempt
- * is made to clean TWQ1 with zero napi_budget. We do not use it for
- * any other ring.
- */
- s32 deferred_head;
+ union {
+ /* Rx BDQs only */
+ struct page_pool *page_pool;
+
+ /* Deferred_head is used to cache the head for TWQ1 if
+ * an attempt is made to clean TWQ1 with zero napi_budget.
+ * We do not use it for any other ring.
+ */
+ s32 deferred_head;
+ };
struct fbnic_queue_stats stats;
@@ -142,7 +147,6 @@ struct fbnic_q_triad {
struct fbnic_napi_vector {
struct napi_struct napi;
struct device *dev; /* Device for DMA unmapping */
- struct page_pool *page_pool;
struct fbnic_dev *fbd;
u16 v_idx;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index fea4577e38d4..7f8bdb08db9f 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -640,7 +640,7 @@ static void fbnic_clean_twq1(struct fbnic_napi_vector *nv, bool pp_allow_direct,
FBNIC_TWD_TYPE_AL;
total_bytes += FIELD_GET(FBNIC_TWD_LEN_MASK, twd);
- page_pool_put_page(nv->page_pool, page, -1, pp_allow_direct);
+ page_pool_put_page(page->pp, page, -1, pp_allow_direct);
next_desc:
head++;
head &= ring->size_mask;
@@ -735,13 +735,13 @@ static struct page *fbnic_page_pool_get(struct fbnic_ring *ring,
}
static void fbnic_page_pool_drain(struct fbnic_ring *ring, unsigned int idx,
- struct fbnic_napi_vector *nv, int budget)
+ int budget)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
struct page *page = rx_buf->page;
if (!page_pool_unref_page(page, rx_buf->pagecnt_bias))
- page_pool_put_unrefed_page(nv->page_pool, page, -1, !!budget);
+ page_pool_put_unrefed_page(ring->page_pool, page, -1, !!budget);
rx_buf->page = NULL;
}
@@ -826,8 +826,8 @@ fbnic_clean_tcq(struct fbnic_napi_vector *nv, struct fbnic_q_triad *qt,
fbnic_clean_twq(nv, napi_budget, qt, ts_head, head0, head1);
}
-static void fbnic_clean_bdq(struct fbnic_napi_vector *nv, int napi_budget,
- struct fbnic_ring *ring, unsigned int hw_head)
+static void fbnic_clean_bdq(struct fbnic_ring *ring, unsigned int hw_head,
+ int napi_budget)
{
unsigned int head = ring->head;
@@ -835,7 +835,7 @@ static void fbnic_clean_bdq(struct fbnic_napi_vector *nv, int napi_budget,
return;
do {
- fbnic_page_pool_drain(ring, head, nv, napi_budget);
+ fbnic_page_pool_drain(ring, head, napi_budget);
head++;
head &= ring->size_mask;
@@ -865,7 +865,7 @@ static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, struct page *page)
} while (--i);
}
-static void fbnic_fill_bdq(struct fbnic_napi_vector *nv, struct fbnic_ring *bdq)
+static void fbnic_fill_bdq(struct fbnic_ring *bdq)
{
unsigned int count = fbnic_desc_unused(bdq);
unsigned int i = bdq->tail;
@@ -876,7 +876,7 @@ static void fbnic_fill_bdq(struct fbnic_napi_vector *nv, struct fbnic_ring *bdq)
do {
struct page *page;
- page = page_pool_dev_alloc_pages(nv->page_pool);
+ page = page_pool_dev_alloc_pages(bdq->page_pool);
if (!page) {
u64_stats_update_begin(&bdq->stats.syncp);
bdq->stats.rx.alloc_failed++;
@@ -997,7 +997,7 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
}
}
-static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
+static void fbnic_put_pkt_buff(struct fbnic_q_triad *qt,
struct fbnic_pkt_buff *pkt, int budget)
{
struct page *page;
@@ -1014,12 +1014,13 @@ static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
while (nr_frags--) {
page = skb_frag_page(&shinfo->frags[nr_frags]);
- page_pool_put_full_page(nv->page_pool, page, !!budget);
+ page_pool_put_full_page(qt->sub1.page_pool, page,
+ !!budget);
}
}
page = virt_to_page(pkt->buff.data_hard_start);
- page_pool_put_full_page(nv->page_pool, page, !!budget);
+ page_pool_put_full_page(qt->sub0.page_pool, page, !!budget);
}
static struct sk_buff *fbnic_build_skb(struct fbnic_napi_vector *nv,
@@ -1274,7 +1275,7 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
dropped++;
}
- fbnic_put_pkt_buff(nv, pkt, 1);
+ fbnic_put_pkt_buff(qt, pkt, 1);
}
pkt->buff.data_hard_start = NULL;
@@ -1307,12 +1308,12 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
/* Unmap and free processed buffers */
if (head0 >= 0)
- fbnic_clean_bdq(nv, budget, &qt->sub0, head0);
- fbnic_fill_bdq(nv, &qt->sub0);
+ fbnic_clean_bdq(&qt->sub0, head0, budget);
+ fbnic_fill_bdq(&qt->sub0);
if (head1 >= 0)
- fbnic_clean_bdq(nv, budget, &qt->sub1, head1);
- fbnic_fill_bdq(nv, &qt->sub1);
+ fbnic_clean_bdq(&qt->sub1, head1, budget);
+ fbnic_fill_bdq(&qt->sub1);
/* Record the current head/tail of the queue */
if (rcq->head != head) {
@@ -1462,6 +1463,12 @@ static void fbnic_remove_rx_ring(struct fbnic_net *fbn,
fbn->rx[rxr->q_idx] = NULL;
}
+static void fbnic_free_qt_page_pools(struct fbnic_q_triad *qt)
+{
+ page_pool_destroy(qt->sub0.page_pool);
+ page_pool_destroy(qt->sub1.page_pool);
+}
+
static void fbnic_free_napi_vector(struct fbnic_net *fbn,
struct fbnic_napi_vector *nv)
{
@@ -1479,10 +1486,10 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
+ fbnic_free_qt_page_pools(&nv->qt[i]);
}
fbnic_napi_free_irq(fbd, nv);
- page_pool_destroy(nv->page_pool);
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
@@ -1500,13 +1507,14 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
#define FBNIC_PAGE_POOL_FLAGS \
(PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV)
-static int fbnic_alloc_nv_page_pool(struct fbnic_net *fbn,
- struct fbnic_napi_vector *nv)
+static int
+fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
+ struct fbnic_q_triad *qt)
{
struct page_pool_params pp_params = {
.order = 0,
.flags = FBNIC_PAGE_POOL_FLAGS,
- .pool_size = (fbn->hpq_size + fbn->ppq_size) * nv->rxt_count,
+ .pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
.dev = nv->dev,
.dma_dir = DMA_BIDIRECTIONAL,
@@ -1533,7 +1541,9 @@ static int fbnic_alloc_nv_page_pool(struct fbnic_net *fbn,
if (IS_ERR(pp))
return PTR_ERR(pp);
- nv->page_pool = pp;
+ qt->sub0.page_pool = pp;
+ page_pool_get(pp);
+ qt->sub1.page_pool = pp;
return 0;
}
@@ -1599,17 +1609,10 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
/* Tie nv back to PCIe dev */
nv->dev = fbd->dev;
- /* Allocate page pool */
- if (rxq_count) {
- err = fbnic_alloc_nv_page_pool(fbn, nv);
- if (err)
- goto napi_del;
- }
-
/* Request the IRQ for napi vector */
err = fbnic_napi_request_irq(fbd, nv);
if (err)
- goto pp_destroy;
+ goto napi_del;
/* Initialize queue triads */
qt = nv->qt;
@@ -1679,10 +1682,14 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
fbnic_ring_init(&qt->cmpl, db, rxq_idx, FBNIC_RING_F_STATS);
fbn->rx[rxq_idx] = &qt->cmpl;
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
+ if (err)
+ goto free_ring_cur_qt;
+
err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, rxq_idx,
nv->napi.napi_id);
if (err)
- goto free_ring_cur_qt;
+ goto free_qt_pp;
/* Update Rx queue index */
rxt_count--;
@@ -1698,6 +1705,8 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
qt--;
xdp_rxq_info_unreg(&qt->xdp_rxq);
+free_qt_pp:
+ fbnic_free_qt_page_pools(qt);
free_ring_cur_qt:
fbnic_remove_rx_ring(fbn, &qt->sub0);
fbnic_remove_rx_ring(fbn, &qt->sub1);
@@ -1714,8 +1723,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
txt_count++;
}
fbnic_napi_free_irq(fbd, nv);
-pp_destroy:
- page_pool_destroy(nv->page_pool);
napi_del:
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
@@ -2019,7 +2026,7 @@ static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
/* Register XDP memory model for completion queue */
err = xdp_reg_mem_model(&nv->qt[i].xdp_rxq.mem,
MEM_TYPE_PAGE_POOL,
- nv->page_pool);
+ nv->qt[i].sub0.page_pool);
if (err)
goto xdp_unreg_mem_model;
@@ -2333,13 +2340,13 @@ void fbnic_flush(struct fbnic_net *fbn)
struct fbnic_q_triad *qt = &nv->qt[t];
/* Clean the work queues of unprocessed work */
- fbnic_clean_bdq(nv, 0, &qt->sub0, qt->sub0.tail);
- fbnic_clean_bdq(nv, 0, &qt->sub1, qt->sub1.tail);
+ fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
+ fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
/* Reset completion queue descriptor ring */
memset(qt->cmpl.desc, 0, qt->cmpl.size);
- fbnic_put_pkt_buff(nv, qt->cmpl.pkt, 0);
+ fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
}
}
@@ -2360,8 +2367,8 @@ void fbnic_fill(struct fbnic_net *fbn)
struct fbnic_q_triad *qt = &nv->qt[t];
/* Populate the header and payload BDQs */
- fbnic_fill_bdq(nv, &qt->sub0);
- fbnic_fill_bdq(nv, &qt->sub1);
+ fbnic_fill_bdq(&qt->sub0);
+ fbnic_fill_bdq(&qt->sub1);
}
}
}
--
2.50.1
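A side effect worth noting: now that pools hang off rings rather than
the NAPI vector, completion paths can recover the owning pool from the
buffer itself, which is what the fbnic_clean_twq1() hunk above does.
A minimal sketch (page->pp and page_pool_put_page() are the real API;
the function is hypothetical):

/* Pages allocated from a page pool record their owning pool in
 * page->pp, so a Tx completion path needs no back-pointer from the
 * NAPI vector or the ring to return them.
 */
static void example_return_tx_page(struct page *page, bool allow_direct)
{
	page_pool_put_page(page->pp, page, -1, allow_direct);
}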
* [PATCH net-next 03/15] eth: fbnic: move xdp_rxq_info_reg() to resource alloc
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 01/15] net: page_pool: add page_pool_get() Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 02/15] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 04/15] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
` (13 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Move rxq_info and mem model registration from fbnic_alloc_napi_vector()
and fbnic_alloc_nv_resources() to fbnic_alloc_rx_qt_resources().
The rxq_info is now registered later in the process, but that
should not cause any issues.
rxq_info lives in the fbnic_q_triad (qt) struct so qt init is a more
natural place. Encapsulating the logic in the qt functions will also
allow simplifying the cleanup in the NAPI related alloc functions
in the next commit.
Rx does not have a dedicated fbnic_free_rx_qt_resources(),
but we can use xdp_rxq_info_is_reg() to tell whether a given
rxq_info was in use (effectively, whether it's a qt for an Rx queue).
Having to pass nv into fbnic_alloc_rx_qt_resources() is not
great in terms of layering, but that's temporary; the page pool
will move soon.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 58 +++++++++-----------
1 file changed, 26 insertions(+), 32 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7f8bdb08db9f..29a780f72c14 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1482,7 +1482,6 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
}
for (j = 0; j < nv->rxt_count; j++, i++) {
- xdp_rxq_info_unreg(&nv->qt[i].xdp_rxq);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
@@ -1686,11 +1685,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
if (err)
goto free_ring_cur_qt;
- err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, rxq_idx,
- nv->napi.napi_id);
- if (err)
- goto free_qt_pp;
-
/* Update Rx queue index */
rxt_count--;
rxq_idx += v_count;
@@ -1704,8 +1698,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
while (rxt_count < nv->rxt_count) {
qt--;
- xdp_rxq_info_unreg(&qt->xdp_rxq);
-free_qt_pp:
fbnic_free_qt_page_pools(qt);
free_ring_cur_qt:
fbnic_remove_rx_ring(fbn, &qt->sub0);
@@ -1938,6 +1930,11 @@ static void fbnic_free_qt_resources(struct fbnic_net *fbn,
fbnic_free_ring_resources(dev, &qt->cmpl);
fbnic_free_ring_resources(dev, &qt->sub1);
fbnic_free_ring_resources(dev, &qt->sub0);
+
+ if (xdp_rxq_info_is_reg(&qt->xdp_rxq)) {
+ xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
+ xdp_rxq_info_unreg(&qt->xdp_rxq);
+ }
}
static int fbnic_alloc_tx_qt_resources(struct fbnic_net *fbn,
@@ -1968,15 +1965,27 @@ static int fbnic_alloc_tx_qt_resources(struct fbnic_net *fbn,
}
static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
+ struct fbnic_napi_vector *nv,
struct fbnic_q_triad *qt)
{
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub0);
+ err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
+ nv->napi.napi_id);
if (err)
return err;
+ /* Register XDP memory model for completion queue */
+ err = xdp_rxq_info_reg_mem_model(&qt->xdp_rxq, MEM_TYPE_PAGE_POOL,
+ qt->sub0.page_pool);
+ if (err)
+ goto unreg_rxq;
+
+ err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub0);
+ if (err)
+ goto unreg_mm;
+
err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub1);
if (err)
goto free_sub0;
@@ -1991,22 +2000,20 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
fbnic_free_ring_resources(dev, &qt->sub1);
free_sub0:
fbnic_free_ring_resources(dev, &qt->sub0);
+unreg_mm:
+ xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
+unreg_rxq:
+ xdp_rxq_info_unreg(&qt->xdp_rxq);
return err;
}
static void fbnic_free_nv_resources(struct fbnic_net *fbn,
struct fbnic_napi_vector *nv)
{
- int i, j;
+ int i;
- /* Free Tx Resources */
- for (i = 0; i < nv->txt_count; i++)
+ for (i = 0; i < nv->txt_count + nv->rxt_count; i++)
fbnic_free_qt_resources(fbn, &nv->qt[i]);
-
- for (j = 0; j < nv->rxt_count; j++, i++) {
- fbnic_free_qt_resources(fbn, &nv->qt[i]);
- xdp_rxq_info_unreg_mem_model(&nv->qt[i].xdp_rxq);
- }
}
static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
@@ -2023,26 +2030,13 @@ static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
/* Allocate Rx Resources */
for (j = 0; j < nv->rxt_count; j++, i++) {
- /* Register XDP memory model for completion queue */
- err = xdp_reg_mem_model(&nv->qt[i].xdp_rxq.mem,
- MEM_TYPE_PAGE_POOL,
- nv->qt[i].sub0.page_pool);
+ err = fbnic_alloc_rx_qt_resources(fbn, nv, &nv->qt[i]);
if (err)
- goto xdp_unreg_mem_model;
-
- err = fbnic_alloc_rx_qt_resources(fbn, &nv->qt[i]);
- if (err)
- goto xdp_unreg_cur_model;
+ goto free_qt_resources;
}
return 0;
-xdp_unreg_mem_model:
- while (j-- && i--) {
- fbnic_free_qt_resources(fbn, &nv->qt[i]);
-xdp_unreg_cur_model:
- xdp_rxq_info_unreg_mem_model(&nv->qt[i].xdp_rxq);
- }
free_qt_resources:
while (i--)
fbnic_free_qt_resources(fbn, &nv->qt[i]);
--
2.50.1
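The ordering this patch settles on, reduced to a skeleton (the
xdp_rxq_info_*() calls are the real XDP API; struct example_qt and the
example_*() functions are placeholders):

#include <net/xdp.h>

struct example_qt {
	struct xdp_rxq_info xdp_rxq;
	struct page_pool *page_pool;
};

static int example_rx_qt_alloc(struct net_device *dev,
			       struct example_qt *qt, u32 qidx,
			       unsigned int napi_id)
{
	int err;

	err = xdp_rxq_info_reg(&qt->xdp_rxq, dev, qidx, napi_id);
	if (err)
		return err;

	err = xdp_rxq_info_reg_mem_model(&qt->xdp_rxq,
					 MEM_TYPE_PAGE_POOL,
					 qt->page_pool);
	if (err)
		goto unreg_rxq;

	/* ring allocations go here, unwinding to the mem model
	 * unreg on failure
	 */
	return 0;

unreg_rxq:
	xdp_rxq_info_unreg(&qt->xdp_rxq);
	return err;
}

static void example_qt_free(struct example_qt *qt)
{
	/* Tx-only triads never registered an rxq_info, so this check
	 * doubles as the "is this an Rx qt?" test in a shared path.
	 */
	if (xdp_rxq_info_is_reg(&qt->xdp_rxq)) {
		xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
		xdp_rxq_info_unreg(&qt->xdp_rxq);
	}
}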
* [PATCH net-next 04/15] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (2 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 03/15] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
` (12 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Page pools are now at the ring level, so move page pool alloc
to fbnic_alloc_rx_qt_resources(), and freeing to
fbnic_free_qt_resources().
This significantly simplifies fbnic_alloc_napi_vector() error
handling by removing a late failure point.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 37 +++++---------------
1 file changed, 9 insertions(+), 28 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 29a780f72c14..15ebbaa0bed2 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1485,7 +1485,6 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
- fbnic_free_qt_page_pools(&nv->qt[i]);
}
fbnic_napi_free_irq(fbd, nv);
@@ -1681,10 +1680,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
fbnic_ring_init(&qt->cmpl, db, rxq_idx, FBNIC_RING_F_STATS);
fbn->rx[rxq_idx] = &qt->cmpl;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
- if (err)
- goto free_ring_cur_qt;
-
/* Update Rx queue index */
rxt_count--;
rxq_idx += v_count;
@@ -1695,26 +1690,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
return 0;
- while (rxt_count < nv->rxt_count) {
- qt--;
-
- fbnic_free_qt_page_pools(qt);
-free_ring_cur_qt:
- fbnic_remove_rx_ring(fbn, &qt->sub0);
- fbnic_remove_rx_ring(fbn, &qt->sub1);
- fbnic_remove_rx_ring(fbn, &qt->cmpl);
- rxt_count++;
- }
- while (txt_count < nv->txt_count) {
- qt--;
-
- fbnic_remove_tx_ring(fbn, &qt->sub0);
- fbnic_remove_xdp_ring(fbn, &qt->sub1);
- fbnic_remove_tx_ring(fbn, &qt->cmpl);
-
- txt_count++;
- }
- fbnic_napi_free_irq(fbd, nv);
napi_del:
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
@@ -1934,6 +1909,7 @@ static void fbnic_free_qt_resources(struct fbnic_net *fbn,
if (xdp_rxq_info_is_reg(&qt->xdp_rxq)) {
xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
xdp_rxq_info_unreg(&qt->xdp_rxq);
+ fbnic_free_qt_page_pools(qt);
}
}
@@ -1971,12 +1947,15 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
- nv->napi.napi_id);
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
if (err)
return err;
- /* Register XDP memory model for completion queue */
+ err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
+ nv->napi.napi_id);
+ if (err)
+ goto free_page_pools;
+
err = xdp_rxq_info_reg_mem_model(&qt->xdp_rxq, MEM_TYPE_PAGE_POOL,
qt->sub0.page_pool);
if (err)
@@ -2004,6 +1983,8 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
unreg_rxq:
xdp_rxq_info_unreg(&qt->xdp_rxq);
+free_page_pools:
+ fbnic_free_qt_page_pools(qt);
return err;
}
--
2.50.1
* [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (3 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 04/15] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 23:22 ` Mina Almasry
2025-08-20 2:56 ` [PATCH net-next 06/15] eth: fbnic: request ops lock Jakub Kicinski
` (11 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Use netmem_ref instead of struct page pointers in prep for
unreadable memory. fbnic has separate free buffer submission
queues for headers and for data. Refactor the helper which
returns the page pointer for a submission buffer to take the
high-level queue container, and create separate handlers for
the header and payload rings. This ties the "upcast" from
netmem to system page to the use of sub0, which we know holds
system pages.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 2 +-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 65 ++++++++++++--------
2 files changed, 40 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index a935a1acfb3e..58ae7f9c8f54 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -100,7 +100,7 @@ struct fbnic_queue_stats {
#define FBNIC_PAGECNT_BIAS_MAX PAGE_SIZE
struct fbnic_rx_buf {
- struct page *page;
+ netmem_ref netmem;
long pagecnt_bias;
};
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 15ebbaa0bed2..8dbe83bc2be1 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -715,35 +715,47 @@ static void fbnic_clean_tsq(struct fbnic_napi_vector *nv,
}
static void fbnic_page_pool_init(struct fbnic_ring *ring, unsigned int idx,
- struct page *page)
+ netmem_ref netmem)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
- page_pool_fragment_page(page, FBNIC_PAGECNT_BIAS_MAX);
+ page_pool_fragment_netmem(netmem, FBNIC_PAGECNT_BIAS_MAX);
rx_buf->pagecnt_bias = FBNIC_PAGECNT_BIAS_MAX;
- rx_buf->page = page;
+ rx_buf->netmem = netmem;
}
-static struct page *fbnic_page_pool_get(struct fbnic_ring *ring,
- unsigned int idx)
+static struct page *
+fbnic_page_pool_get_head(struct fbnic_q_triad *qt, unsigned int idx)
{
- struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
+ struct fbnic_rx_buf *rx_buf = &qt->sub0.rx_buf[idx];
rx_buf->pagecnt_bias--;
- return rx_buf->page;
+ /* sub0 is always fed system pages, from the NAPI-level page_pool */
+ return netmem_to_page(rx_buf->netmem);
+}
+
+static netmem_ref
+fbnic_page_pool_get_data(struct fbnic_q_triad *qt, unsigned int idx)
+{
+ struct fbnic_rx_buf *rx_buf = &qt->sub1.rx_buf[idx];
+
+ rx_buf->pagecnt_bias--;
+
+ return rx_buf->netmem;
}
static void fbnic_page_pool_drain(struct fbnic_ring *ring, unsigned int idx,
int budget)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
- struct page *page = rx_buf->page;
+ netmem_ref netmem = rx_buf->netmem;
- if (!page_pool_unref_page(page, rx_buf->pagecnt_bias))
- page_pool_put_unrefed_page(ring->page_pool, page, -1, !!budget);
+ if (!page_pool_unref_netmem(netmem, rx_buf->pagecnt_bias))
+ page_pool_put_unrefed_netmem(ring->page_pool, netmem, -1,
+ !!budget);
- rx_buf->page = NULL;
+ rx_buf->netmem = 0;
}
static void fbnic_clean_twq(struct fbnic_napi_vector *nv, int napi_budget,
@@ -844,10 +856,10 @@ static void fbnic_clean_bdq(struct fbnic_ring *ring, unsigned int hw_head,
ring->head = head;
}
-static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, struct page *page)
+static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, netmem_ref netmem)
{
__le64 *bdq_desc = &bdq->desc[id * FBNIC_BD_FRAG_COUNT];
- dma_addr_t dma = page_pool_get_dma_addr(page);
+ dma_addr_t dma = page_pool_get_dma_addr_netmem(netmem);
u64 bd, i = FBNIC_BD_FRAG_COUNT;
bd = (FBNIC_BD_PAGE_ADDR_MASK & dma) |
@@ -874,10 +886,10 @@ static void fbnic_fill_bdq(struct fbnic_ring *bdq)
return;
do {
- struct page *page;
+ netmem_ref netmem;
- page = page_pool_dev_alloc_pages(bdq->page_pool);
- if (!page) {
+ netmem = page_pool_dev_alloc_netmems(bdq->page_pool);
+ if (!netmem) {
u64_stats_update_begin(&bdq->stats.syncp);
bdq->stats.rx.alloc_failed++;
u64_stats_update_end(&bdq->stats.syncp);
@@ -885,8 +897,8 @@ static void fbnic_fill_bdq(struct fbnic_ring *bdq)
break;
}
- fbnic_page_pool_init(bdq, i, page);
- fbnic_bd_prep(bdq, i, page);
+ fbnic_page_pool_init(bdq, i, netmem);
+ fbnic_bd_prep(bdq, i, netmem);
i++;
i &= bdq->size_mask;
@@ -933,7 +945,7 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
{
unsigned int hdr_pg_idx = FIELD_GET(FBNIC_RCD_AL_BUFF_PAGE_MASK, rcd);
unsigned int hdr_pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
- struct page *page = fbnic_page_pool_get(&qt->sub0, hdr_pg_idx);
+ struct page *page = fbnic_page_pool_get_head(qt, hdr_pg_idx);
unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
unsigned int frame_sz, hdr_pg_start, hdr_pg_end, headroom;
unsigned char *hdr_start;
@@ -974,7 +986,7 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
unsigned int pg_idx = FIELD_GET(FBNIC_RCD_AL_BUFF_PAGE_MASK, rcd);
unsigned int pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
- struct page *page = fbnic_page_pool_get(&qt->sub1, pg_idx);
+ netmem_ref netmem = fbnic_page_pool_get_data(qt, pg_idx);
unsigned int truesize;
bool added;
@@ -985,11 +997,11 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
FBNIC_BD_FRAG_SIZE;
/* Sync DMA buffer */
- dma_sync_single_range_for_cpu(nv->dev, page_pool_get_dma_addr(page),
+ dma_sync_single_range_for_cpu(nv->dev,
+ page_pool_get_dma_addr_netmem(netmem),
pg_off, truesize, DMA_BIDIRECTIONAL);
- added = xdp_buff_add_frag(&pkt->buff, page_to_netmem(page), pg_off, len,
- truesize);
+ added = xdp_buff_add_frag(&pkt->buff, netmem, pg_off, len, truesize);
if (unlikely(!added)) {
pkt->add_frag_failed = true;
netdev_err_once(nv->napi.dev,
@@ -1007,15 +1019,16 @@ static void fbnic_put_pkt_buff(struct fbnic_q_triad *qt,
if (xdp_buff_has_frags(&pkt->buff)) {
struct skb_shared_info *shinfo;
+ netmem_ref netmem;
int nr_frags;
shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
nr_frags = shinfo->nr_frags;
while (nr_frags--) {
- page = skb_frag_page(&shinfo->frags[nr_frags]);
- page_pool_put_full_page(qt->sub1.page_pool, page,
- !!budget);
+ netmem = skb_frag_netmem(&shinfo->frags[nr_frags]);
+ page_pool_put_full_netmem(qt->sub1.page_pool, netmem,
+ !!budget);
}
}
--
2.50.1
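The shape of the conversion, as a minimal sketch. The netmem_ref type
and the *_netmem() page pool calls are the real API; the example_*
structs and functions are placeholders:

#include <net/netmem.h>
#include <net/page_pool/helpers.h>

struct example_rx_buf {
	netmem_ref netmem;
};

struct example_ring {
	struct page_pool *page_pool;
	struct example_rx_buf *rx_buf;
};

/* Returns 0 on allocation failure in this sketch */
static dma_addr_t example_refill_one(struct example_ring *ring, u16 idx)
{
	netmem_ref netmem;

	/* Allocation and DMA address lookup never assume a page */
	netmem = page_pool_dev_alloc_netmems(ring->page_pool);
	if (!netmem)
		return 0;

	ring->rx_buf[idx].netmem = netmem;
	return page_pool_get_dma_addr_netmem(netmem);
}

/* Only the header ring may upcast: it is always fed system pages,
 * while payload netmem may become unreadable once a memory provider
 * is attached.
 */
static struct page *example_hdr_page(struct example_ring *hdr_ring,
				     unsigned int idx)
{
	return netmem_to_page(hdr_ring->rx_buf[idx].netmem);
}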
* Re: [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable
2025-08-20 2:56 ` [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
@ 2025-08-20 23:22 ` Mina Almasry
0 siblings, 0 replies; 33+ messages in thread
From: Mina Almasry @ 2025-08-20 23:22 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Tue, Aug 19, 2025 at 7:57 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> Use netmem_ref instead of struct page pointers in prep for
> unreadable memory. fbnic has separate free buffer submission
> queues for headers and for data. Refactor the helper which
> returns the page pointer for a submission buffer to take the
> high-level queue container, and create separate handlers for
> the header and payload rings. This ties the "upcast" from
> netmem to system page to the use of sub0, which we know holds
> system pages.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
--
Thanks,
Mina
* [PATCH net-next 06/15] eth: fbnic: request ops lock
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (4 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 05/15] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 07/15] eth: fbnic: split fbnic_disable() Jakub Kicinski
` (10 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
We'll add queue ops soon. Queue ops will opt the driver into
extra locking. Request this locking explicitly already to make
future patches smaller and easier to review.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_netdev.c | 2 ++
drivers/net/ethernet/meta/fbnic/fbnic_pci.c | 9 ++++++++-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 15 ++++++++-------
3 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
index b8b684ad376b..37c900ce8257 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
@@ -750,6 +750,8 @@ struct net_device *fbnic_netdev_alloc(struct fbnic_dev *fbd)
fbnic_set_ethtool_ops(netdev);
+ netdev->request_ops_lock = true;
+
fbn = netdev_priv(netdev);
fbn->netdev = netdev;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
index b70e4cadb37b..bc51e1e4846e 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
@@ -206,8 +206,11 @@ static void fbnic_service_task(struct work_struct *work)
fbnic_health_check(fbd);
- if (netif_carrier_ok(fbd->netdev))
+ if (netif_carrier_ok(fbd->netdev)) {
+ netdev_lock(fbd->netdev);
fbnic_napi_depletion_check(fbd->netdev);
+ netdev_unlock(fbd->netdev);
+ }
if (netif_running(fbd->netdev))
schedule_delayed_work(&fbd->service_task, HZ);
@@ -392,12 +395,14 @@ static int fbnic_pm_suspend(struct device *dev)
goto null_uc_addr;
rtnl_lock();
+ netdev_lock(netdev);
netif_device_detach(netdev);
if (netif_running(netdev))
netdev->netdev_ops->ndo_stop(netdev);
+ netdev_unlock(netdev);
rtnl_unlock();
null_uc_addr:
@@ -463,6 +468,7 @@ static int __fbnic_pm_resume(struct device *dev)
fbnic_reset_queues(fbn, fbn->num_tx_queues, fbn->num_rx_queues);
rtnl_lock();
+ netdev_lock(netdev);
if (netif_running(netdev)) {
err = __fbnic_open(fbn);
@@ -470,6 +476,7 @@ static int __fbnic_pm_resume(struct device *dev)
goto err_free_mbx;
}
+ netdev_unlock(netdev);
rtnl_unlock();
return 0;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 8dbe83bc2be1..dc0735b20739 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1501,7 +1501,7 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
}
fbnic_napi_free_irq(fbd, nv);
- netif_napi_del(&nv->napi);
+ netif_napi_del_locked(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
}
@@ -1611,11 +1611,12 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
/* Tie napi to netdev */
fbn->napi[fbnic_napi_idx(nv)] = nv;
- netif_napi_add(fbn->netdev, &nv->napi, fbnic_poll);
+ netif_napi_add_locked(fbn->netdev, &nv->napi, fbnic_poll);
/* Record IRQ to NAPI struct */
- netif_napi_set_irq(&nv->napi,
- pci_irq_vector(to_pci_dev(fbd->dev), nv->v_idx));
+ netif_napi_set_irq_locked(&nv->napi,
+ pci_irq_vector(to_pci_dev(fbd->dev),
+ nv->v_idx));
/* Tie nv back to PCIe dev */
nv->dev = fbd->dev;
@@ -1704,7 +1705,7 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
return 0;
napi_del:
- netif_napi_del(&nv->napi);
+ netif_napi_del_locked(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
return err;
@@ -2173,7 +2174,7 @@ void fbnic_napi_disable(struct fbnic_net *fbn)
int i;
for (i = 0; i < fbn->num_napi; i++) {
- napi_disable(&fbn->napi[i]->napi);
+ napi_disable_locked(&fbn->napi[i]->napi);
fbnic_nv_irq_disable(fbn->napi[i]);
}
@@ -2621,7 +2622,7 @@ void fbnic_napi_enable(struct fbnic_net *fbn)
for (i = 0; i < fbn->num_napi; i++) {
struct fbnic_napi_vector *nv = fbn->napi[i];
- napi_enable(&nv->napi);
+ napi_enable_locked(&nv->napi);
fbnic_nv_irq_enable(nv);
--
2.50.1
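What opting in implies, in sketch form. netdev->request_ops_lock,
netdev_lock()/netdev_unlock() and the _locked NAPI variants are the
real API (request_ops_lock is set at netdev allocation time, before
register_netdev()); the example_*() functions are placeholders:

#include <linux/netdevice.h>

static int example_poll(struct napi_struct *napi, int budget)
{
	return 0;	/* stub */
}

/* Called from ndo_open: with request_ops_lock set the core already
 * holds the instance lock around ndo and queue ops, so NAPI setup
 * must use the _locked variants.
 */
static void example_open_napi(struct net_device *netdev,
			      struct napi_struct *napi, int irq)
{
	netif_napi_add_locked(netdev, napi, example_poll);
	netif_napi_set_irq_locked(napi, irq);
	napi_enable_locked(napi);
}

/* Paths the core does not wrap -- like the service task above --
 * must take the instance lock themselves before touching NAPI or
 * queue state.
 */
static void example_service_task(struct net_device *netdev,
				 struct napi_struct *napi)
{
	netdev_lock(netdev);
	napi_disable_locked(napi);
	/* repost buffers, poke doorbells, etc. */
	napi_enable_locked(napi);
	netdev_unlock(netdev);
}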
* [PATCH net-next 07/15] eth: fbnic: split fbnic_disable()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (5 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 06/15] eth: fbnic: request ops lock Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 08/15] eth: fbnic: split fbnic_flush() Jakub Kicinski
` (9 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Factor out handling a single nv from fbnic_disable() to make
it reusable for queue ops. Use a __ prefix for the factored-out
code. The real fbnic_nv_disable(), which will include
fbnic_wrfl(), will be added with the qops to avoid unused
function warnings.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 46 +++++++++++---------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index dc0735b20739..7d6bf35acfd4 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2180,31 +2180,35 @@ void fbnic_napi_disable(struct fbnic_net *fbn)
}
}
+static void __fbnic_nv_disable(struct fbnic_napi_vector *nv)
+{
+ int i, t;
+
+ /* Disable Tx queue triads */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_disable_twq0(&qt->sub0);
+ fbnic_disable_twq1(&qt->sub1);
+ fbnic_disable_tcq(&qt->cmpl);
+ }
+
+ /* Disable Rx queue triads */
+ for (i = 0; i < nv->rxt_count; i++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_disable_bdq(&qt->sub0, &qt->sub1);
+ fbnic_disable_rcq(&qt->cmpl);
+ }
+}
+
void fbnic_disable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
- int i, j, t;
+ int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
-
- /* Disable Tx queue triads */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_disable_twq0(&qt->sub0);
- fbnic_disable_twq1(&qt->sub1);
- fbnic_disable_tcq(&qt->cmpl);
- }
-
- /* Disable Rx queue triads */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_disable_bdq(&qt->sub0, &qt->sub1);
- fbnic_disable_rcq(&qt->cmpl);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ __fbnic_nv_disable(fbn->napi[i]);
fbnic_wrfl(fbd);
}
--
2.50.1
* [PATCH net-next 08/15] eth: fbnic: split fbnic_flush()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (6 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 07/15] eth: fbnic: split fbnic_disable() Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 09/15] eth: fbnic: split fbnic_enable() Jakub Kicinski
` (8 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Factor out handling a single nv from fbnic_flush() to make
it reusable for queue ops.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 87 ++++++++++----------
1 file changed, 45 insertions(+), 42 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7d6bf35acfd4..8384e73b4492 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2297,52 +2297,55 @@ int fbnic_wait_all_queues_idle(struct fbnic_dev *fbd, bool may_fail)
return err;
}
+static void fbnic_nv_flush(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Flush any processed Tx Queue Triads and drop the rest */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+ struct netdev_queue *tx_queue;
+
+ /* Clean the work queues of unprocessed work */
+ fbnic_clean_twq0(nv, 0, &qt->sub0, true, qt->sub0.tail);
+ fbnic_clean_twq1(nv, false, &qt->sub1, true,
+ qt->sub1.tail);
+
+ /* Reset completion queue descriptor ring */
+ memset(qt->cmpl.desc, 0, qt->cmpl.size);
+
+ /* Nothing else to do if Tx queue is disabled */
+ if (qt->sub0.flags & FBNIC_RING_F_DISABLED)
+ continue;
+
+ /* Reset BQL associated with Tx queue */
+ tx_queue = netdev_get_tx_queue(nv->napi.dev,
+ qt->sub0.q_idx);
+ netdev_tx_reset_queue(tx_queue);
+ }
+
+ /* Flush any processed Rx Queue Triads and drop the rest */
+ for (j = 0; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ /* Clean the work queues of unprocessed work */
+ fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
+ fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
+
+ /* Reset completion queue descriptor ring */
+ memset(qt->cmpl.desc, 0, qt->cmpl.size);
+
+ fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
+ memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
+ }
+}
+
void fbnic_flush(struct fbnic_net *fbn)
{
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Flush any processed Tx Queue Triads and drop the rest */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
- struct netdev_queue *tx_queue;
-
- /* Clean the work queues of unprocessed work */
- fbnic_clean_twq0(nv, 0, &qt->sub0, true, qt->sub0.tail);
- fbnic_clean_twq1(nv, false, &qt->sub1, true,
- qt->sub1.tail);
-
- /* Reset completion queue descriptor ring */
- memset(qt->cmpl.desc, 0, qt->cmpl.size);
-
- /* Nothing else to do if Tx queue is disabled */
- if (qt->sub0.flags & FBNIC_RING_F_DISABLED)
- continue;
-
- /* Reset BQL associated with Tx queue */
- tx_queue = netdev_get_tx_queue(nv->napi.dev,
- qt->sub0.q_idx);
- netdev_tx_reset_queue(tx_queue);
- }
-
- /* Flush any processed Rx Queue Triads and drop the rest */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- /* Clean the work queues of unprocessed work */
- fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
- fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
-
- /* Reset completion queue descriptor ring */
- memset(qt->cmpl.desc, 0, qt->cmpl.size);
-
- fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
- memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ fbnic_nv_flush(fbn->napi[i]);
}
void fbnic_fill(struct fbnic_net *fbn)
--
2.50.1
* [PATCH net-next 09/15] eth: fbnic: split fbnic_enable()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (7 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 08/15] eth: fbnic: split fbnic_flush() Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:56 ` [PATCH net-next 10/15] eth: fbnic: split fbnic_fill() Jakub Kicinski
` (7 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Factor out handling a single nv from fbnic_enable() to make
it reusable for queue ops. Use a __ prefix for the factored-out
code. The real fbnic_nv_enable(), which will include
fbnic_wrfl(), will be added with the qops to avoid unused
function warnings.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 47 +++++++++++---------
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 8384e73b4492..38dd1afb7005 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2584,33 +2584,36 @@ static void fbnic_enable_rcq(struct fbnic_napi_vector *nv,
fbnic_ring_wr32(rcq, FBNIC_QUEUE_RCQ_CTL, FBNIC_QUEUE_RCQ_CTL_ENABLE);
}
+static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Setup Tx Queue Triads */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_enable_twq0(&qt->sub0);
+ fbnic_enable_twq1(&qt->sub1);
+ fbnic_enable_tcq(nv, &qt->cmpl);
+ }
+
+ /* Setup Rx Queue Triads */
+ for (j = 0; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_enable_bdq(&qt->sub0, &qt->sub1);
+ fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
+ fbnic_enable_rcq(nv, &qt->cmpl);
+ }
+}
+
void fbnic_enable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Setup Tx Queue Triads */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_enable_twq0(&qt->sub0);
- fbnic_enable_twq1(&qt->sub1);
- fbnic_enable_tcq(nv, &qt->cmpl);
- }
-
- /* Setup Rx Queue Triads */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_enable_bdq(&qt->sub0, &qt->sub1);
- fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
- fbnic_enable_rcq(nv, &qt->cmpl);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ __fbnic_nv_enable(fbn->napi[i]);
fbnic_wrfl(fbd);
}
--
2.50.1
* [PATCH net-next 10/15] eth: fbnic: split fbnic_fill()
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (8 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 09/15] eth: fbnic: split fbnic_enable() Jakub Kicinski
@ 2025-08-20 2:56 ` Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable Jakub Kicinski
` (6 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:56 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Factor out handling a single nv from fbnic_fill() to make
it reusable for queue ops.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 33 +++++++++++---------
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 38dd1afb7005..7694b25ef77d 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2348,25 +2348,28 @@ void fbnic_flush(struct fbnic_net *fbn)
fbnic_nv_flush(fbn->napi[i]);
}
+static void fbnic_nv_fill(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Configure NAPI mapping and populate pages
+ * in the BDQ rings to use for Rx
+ */
+ for (j = 0, t = nv->txt_count; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ /* Populate the header and payload BDQs */
+ fbnic_fill_bdq(&qt->sub0);
+ fbnic_fill_bdq(&qt->sub1);
+ }
+}
+
void fbnic_fill(struct fbnic_net *fbn)
{
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Configure NAPI mapping and populate pages
- * in the BDQ rings to use for Rx
- */
- for (j = 0, t = nv->txt_count; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- /* Populate the header and payload BDQs */
- fbnic_fill_bdq(&qt->sub0);
- fbnic_fill_bdq(&qt->sub1);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ fbnic_nv_fill(fbn->napi[i]);
}
static void fbnic_enable_twq0(struct fbnic_ring *twq)
--
2.50.1
* [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (9 preceding siblings ...)
2025-08-20 2:56 ` [PATCH net-next 10/15] eth: fbnic: split fbnic_fill() Jakub Kicinski
@ 2025-08-20 2:57 ` Jakub Kicinski
2025-08-20 11:30 ` Dragos Tatulea
2025-08-20 2:57 ` [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
` (5 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:57 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
mlx5 pokes into the rxq state to check if the queue has a memory
provider, and therefore whether it may produce unreadable mem.
Add a helper for doing this in the page pool API. fbnic will want
a similar thing (tho, for a slightly different reason).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
include/net/page_pool/helpers.h | 9 +++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 10 ++--------
net/core/page_pool.c | 8 ++++++++
3 files changed, 19 insertions(+), 8 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index aa3719f28216..307c2436fa12 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -505,6 +505,15 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
page_pool_update_nid(pool, new_nid);
}
+bool __page_pool_rxq_wants_unreadable(struct net_device *dev, unsigned int qid);
+
+static inline bool
+page_pool_rxq_wants_unreadable(const struct page_pool_params *pp_params)
+{
+ return __page_pool_rxq_wants_unreadable(pp_params->netdev,
+ pp_params->queue_idx);
+}
+
static inline bool page_pool_is_unreadable(struct page_pool *pool)
{
return !!pool->mp_ops;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 21bb88c5d3dc..cee96ded300e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -42,6 +42,7 @@
#include <net/netdev_lock.h>
#include <net/netdev_queues.h>
#include <net/netdev_rx_queue.h>
+#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>
#include <net/pkt_sched.h>
#include <net/xdp_sock_drv.h>
@@ -777,13 +778,6 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
bitmap_free(rq->mpwqe.shampo->bitmap);
}
-static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
-{
- struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
-
- return !!rxq->mp_params.mp_ops;
-}
-
static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
struct mlx5e_params *params,
struct mlx5e_rq_param *rqp,
@@ -822,7 +816,7 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
hd_pool_size = (rq->mpwqe.shampo->hd_per_wqe * wq_size) /
MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
- if (mlx5_rq_needs_separate_hd_pool(rq)) {
+ if (__page_pool_rxq_wants_unreadable(rq->netdev, rq->ix)) {
/* Separate page pool for shampo headers */
struct page_pool_params pp_params = { };
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 343a6cac21e3..9f087a6742c3 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -190,6 +190,14 @@ static void page_pool_struct_check(void)
PAGE_POOL_FRAG_GROUP_ALIGN);
}
+bool __page_pool_rxq_wants_unreadable(struct net_device *dev, unsigned int qid)
+{
+ struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, qid);
+
+ return !!rxq->mp_params.mp_ops;
+}
+EXPORT_SYMBOL(__page_pool_rxq_wants_unreadable);
+
static int page_pool_init(struct page_pool *pool,
const struct page_pool_params *params,
int cpuid)
--
2.50.1
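Intended usage, sketched below. The two helpers are the ones added by
this patch and the page_pool_params fields are real; the surrounding
function and its decision logic are hypothetical:

#include <net/page_pool/helpers.h>

static struct page_pool *
example_create_data_pool(struct net_device *dev, unsigned int qid)
{
	struct page_pool_params pp = {
		.netdev		= dev,
		.queue_idx	= qid,
		/* order, pool_size, dma_dir, ... as usual */
	};

	/* __page_pool_rxq_wants_unreadable(dev, qid) works too, for
	 * callers that have not filled in the params yet.
	 */
	if (page_pool_rxq_wants_unreadable(&pp)) {
		/* The queue is bound to a memory provider: payloads
		 * may be unreadable, so headers need their own,
		 * always-readable pool (as mlx5 does above).
		 */
	}

	return page_pool_create(&pp);
}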
* Re: [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable
2025-08-20 2:57 ` [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable Jakub Kicinski
@ 2025-08-20 11:30 ` Dragos Tatulea
2025-08-20 14:52 ` Jakub Kicinski
0 siblings, 1 reply; 33+ messages in thread
From: Dragos Tatulea @ 2025-08-20 11:30 UTC (permalink / raw)
To: Jakub Kicinski, davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, hawk, ilias.apalodimas, alexanderduyck, sdf
On Wed Aug 20, 2025 at 2:57 AM UTC, Jakub Kicinski wrote:
> mlx5 pokes into the rxq state to check if the queue has a memory
> provider, and therefore whether it may produce unreadable mem.
> Add a helper for doing this in the page pool API. fbnic will want
> a similar thing (tho, for a slightly different reason).
>
Thanks for taking this up!
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> include/net/page_pool/helpers.h | 9 +++++++++
> drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 10 ++--------
> net/core/page_pool.c | 8 ++++++++
> 3 files changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index aa3719f28216..307c2436fa12 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -505,6 +505,15 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
> page_pool_update_nid(pool, new_nid);
> }
>
> +bool __page_pool_rxq_wants_unreadable(struct net_device *dev, unsigned int qid);
> +
> +static inline bool
> +page_pool_rxq_wants_unreadable(const struct page_pool_params *pp_params)
> +{
> + return __page_pool_rxq_wants_unreadable(pp_params->netdev,
> + pp_params->queue_idx);
> +}
> +
Why not do this in the caller and have just a
page_pool_rxq_wants_unreadable() instead? It does make the code more
succinct in the next patch, but it looks weird as a generic function.
Subjective opinion though.
Thanks,
Dragos
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable
2025-08-20 11:30 ` Dragos Tatulea
@ 2025-08-20 14:52 ` Jakub Kicinski
2025-08-20 17:45 ` Dragos Tatulea
0 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 14:52 UTC (permalink / raw)
To: Dragos Tatulea
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
almasrymina, michael.chan, tariqt, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Wed, 20 Aug 2025 11:30:42 +0000 Dragos Tatulea wrote:
> > +bool __page_pool_rxq_wants_unreadable(struct net_device *dev, unsigned int qid);
> > +
> > +static inline bool
> > +page_pool_rxq_wants_unreadable(const struct page_pool_params *pp_params)
> > +{
> > + return __page_pool_rxq_wants_unreadable(pp_params->netdev,
> > + pp_params->queue_idx);
> > +}
> > +
> Why not do this in the caller and have just a
> page_pool_rxq_wants_unreadable() instead? It does make the code more
> succinct in the next patch, but it looks weird as a generic function.
> Subjective opinion though.
Do you mean remove the version of the helper which takes pp_params?
Yeah, dunno. I wrote the version that takes pp_params first.
I wanted the helper to live next to page_pool_is_unreadable().
If we remove the version that takes the pp_params, this helper makes
more sense as an rxq helper, in netdev_queues.h / netdev_rx_queue.c:
bool netif_rxq_has_unreadable_mp(dev, rxq_idx)
right?
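For concreteness - a sketch of what the relocated helper could look
like, with the body unchanged from the page pool version above:

	bool netif_rxq_has_unreadable_mp(struct net_device *dev, unsigned int rxq_idx)
	{
		struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);

		/* a queue "wants unreadable" iff a memory provider is bound */
		return !!rxq->mp_params.mp_ops;
	}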
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable
2025-08-20 14:52 ` Jakub Kicinski
@ 2025-08-20 17:45 ` Dragos Tatulea
0 siblings, 0 replies; 33+ messages in thread
From: Dragos Tatulea @ 2025-08-20 17:45 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
almasrymina, michael.chan, tariqt, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Wed, Aug 20, 2025 at 07:52:47AM -0700, Jakub Kicinski wrote:
> On Wed, 20 Aug 2025 11:30:42 +0000 Dragos Tatulea wrote:
> > > +bool __page_pool_rxq_wants_unreadable(struct net_device *dev, unsigned int qid);
> > > +
> > > +static inline bool
> > > +page_pool_rxq_wants_unreadable(const struct page_pool_params *pp_params)
> > > +{
> > > + return __page_pool_rxq_wants_unreadable(pp_params->netdev,
> > > + pp_params->queue_idx);
> > > +}
> > > +
> > Why not do this in the caller and have just a
> > page_pool_rxq_wants_unreadable() instead? It does make the code more
> > succinct in the next patch, but it looks weird as a generic function.
> > Subjective opinion though.
>
> Do you mean remove the version of the helper which takes pp_params?
> Yeah, dunno. I wrote the version that takes pp_params first.
> I wanted the helper to live next to page_pool_is_unreadable().
>
> If we remove the version that takes the pp_params, this helper makes
> more sense as an rxq helper, in netdev_queues.h / netdev_rx_queue.c :
>
> bool netif_rxq_has_unreadable_mp(dev, rxq_idx)
>
> right?
Yes, I think so. The memory provider name seems more precise as well.
Thanks,
Dragos
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (10 preceding siblings ...)
2025-08-20 2:57 ` [PATCH net-next 11/15] net: page_pool: add helper to pre-check if PP will be unreadable Jakub Kicinski
@ 2025-08-20 2:57 ` Jakub Kicinski
2025-08-20 23:33 ` Mina Almasry
2025-08-20 2:57 ` [PATCH net-next 13/15] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
` (4 subsequent siblings)
16 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:57 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Allow allocating a page pool with unreadable memory for the payload
ring (sub1). We need to provide the queue ID so that the memory provider
can match the page pool, and we must use the appropriate page pool DMA
sync helper. While at it, remove the define for the page pool flags.
The rxq_idx is passed to fbnic_alloc_qt_page_pools() explicitly
to make it easy to allocate page pools without NAPI (see the patch
after the next).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 31 +++++++++++++-------
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7694b25ef77d..44d9f1598820 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -997,9 +997,8 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
FBNIC_BD_FRAG_SIZE;
/* Sync DMA buffer */
- dma_sync_single_range_for_cpu(nv->dev,
- page_pool_get_dma_addr_netmem(netmem),
- pg_off, truesize, DMA_BIDIRECTIONAL);
+ page_pool_dma_sync_netmem_for_cpu(qt->sub1.page_pool, netmem,
+ pg_off, truesize);
added = xdp_buff_add_frag(&pkt->buff, netmem, pg_off, len, truesize);
if (unlikely(!added)) {
@@ -1515,16 +1514,14 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
fbnic_free_napi_vector(fbn, fbn->napi[i]);
}
-#define FBNIC_PAGE_POOL_FLAGS \
- (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV)
-
static int
fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
- struct fbnic_q_triad *qt)
+ struct fbnic_q_triad *qt, unsigned int rxq_idx)
{
struct page_pool_params pp_params = {
.order = 0,
- .flags = FBNIC_PAGE_POOL_FLAGS,
+ .flags = PP_FLAG_DMA_MAP |
+ PP_FLAG_DMA_SYNC_DEV,
.pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
.dev = nv->dev,
@@ -1533,6 +1530,7 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
.max_len = PAGE_SIZE,
.napi = &nv->napi,
.netdev = fbn->netdev,
+ .queue_idx = rxq_idx,
};
struct page_pool *pp;
@@ -1553,10 +1551,23 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
return PTR_ERR(pp);
qt->sub0.page_pool = pp;
- page_pool_get(pp);
+ if (page_pool_rxq_wants_unreadable(&pp_params)) {
+ pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+
+ pp = page_pool_create(&pp_params);
+ if (IS_ERR(pp))
+ goto err_destroy_sub0;
+ } else {
+ page_pool_get(pp);
+ }
qt->sub1.page_pool = pp;
return 0;
+
+err_destroy_sub0:
+ page_pool_destroy(qt->sub0.page_pool);
+ return PTR_ERR(pp);
}
static void fbnic_ring_init(struct fbnic_ring *ring, u32 __iomem *doorbell,
@@ -1961,7 +1972,7 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt, qt->cmpl.q_idx);
if (err)
return err;
--
2.50.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
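Distilled, the ownership model this patch lands on (a sketch only; most
parameters and the error unwinding are elided):

	/* the header ring (sub0) always gets a readable pool */
	pp = page_pool_create(&pp_params);
	qt->sub0.page_pool = pp;

	if (page_pool_rxq_wants_unreadable(&pp_params)) {
		/* the payload ring (sub1) gets its own pool, which
		 * may hand out unreadable netmem
		 */
		pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
		pp_params.dma_dir = DMA_FROM_DEVICE;
		pp = page_pool_create(&pp_params);
	} else {
		/* share one pool; the extra reference lets teardown
		 * call page_pool_destroy() on both ring pointers
		 * unconditionally
		 */
		page_pool_get(pp);
	}
	qt->sub1.page_pool = pp;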
* Re: [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads
2025-08-20 2:57 ` [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
@ 2025-08-20 23:33 ` Mina Almasry
2025-08-21 0:45 ` Jakub Kicinski
0 siblings, 1 reply; 33+ messages in thread
From: Mina Almasry @ 2025-08-20 23:33 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Tue, Aug 19, 2025 at 7:57 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> Allow allocating a page pool with unreadable memory for the payload
> ring (sub1). We need to provide the queue ID so that the memory provider
> can match the PP, and use the appropriate page pool DMA sync helper.
> While at it remove the define for page pool flags.
>
> The rxq_idx is passed to fbnic_alloc_rx_qt_resources() explicitly
> to make it easy to allocate page pools without NAPI (see the patch
> after the next).
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
> + if (page_pool_rxq_wants_unreadable(&pp_params)) {
> + pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
> + pp_params.dma_dir = DMA_FROM_DEVICE;
> +
Although I'm not sure why the dma_dir needed to change specifically
for unreadable.
--
Thanks,
Mina
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads
2025-08-20 23:33 ` Mina Almasry
@ 2025-08-21 0:45 ` Jakub Kicinski
0 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-21 0:45 UTC (permalink / raw)
To: Mina Almasry
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf
On Wed, 20 Aug 2025 16:33:18 -0700 Mina Almasry wrote:
> > + if (page_pool_rxq_wants_unreadable(&pp_params)) {
> > + pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
> > + pp_params.dma_dir = DMA_FROM_DEVICE;
> > +
>
> Although I'm not sure why the dma_dir needed to change specifically
> for unreadable.
Driver defaults to BIDIR AFAIU to avoid having to reset the
datapath when XDP is bound. But BIDIR is not compatible with
unreadable mem, and for good reason.
Will add to the commit msg.
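In code-comment form, the rationale might read (a sketch):

	/* The driver-wide default is DMA_BIDIRECTIONAL so that XDP
	 * can be bound without resetting the datapath; unreadable
	 * netmem is incompatible with a bidirectional mapping, so
	 * the unreadable payload pool switches to DMA_FROM_DEVICE.
	 */
	pp_params.dma_dir = DMA_FROM_DEVICE;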
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH net-next 13/15] eth: fbnic: defer page pool recycling activation to queue start
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (11 preceding siblings ...)
2025-08-20 2:57 ` [PATCH net-next 12/15] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
@ 2025-08-20 2:57 ` Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 14/15] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
` (3 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:57 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
In preparation for queue ops support we need to be more careful
about when direct page pool recycling is enabled. Don't set the
NAPI pointer at pool creation; instead, call
page_pool_enable_direct_recycling() from the function that activates
the queue (once the config can no longer fail).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 44d9f1598820..958793be21a1 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1528,7 +1528,6 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
.dma_dir = DMA_BIDIRECTIONAL,
.offset = 0,
.max_len = PAGE_SIZE,
- .napi = &nv->napi,
.netdev = fbn->netdev,
.queue_idx = rxq_idx,
};
@@ -2615,6 +2614,11 @@ static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
for (j = 0; j < nv->rxt_count; j++, t++) {
struct fbnic_q_triad *qt = &nv->qt[t];
+ page_pool_enable_direct_recycling(qt->sub0.page_pool,
+ &nv->napi);
+ page_pool_enable_direct_recycling(qt->sub1.page_pool,
+ &nv->napi);
+
fbnic_enable_bdq(&qt->sub0, &qt->sub1);
fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
fbnic_enable_rcq(nv, &qt->cmpl);
--
2.50.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
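The resulting lifecycle, sketched with the names used in the driver
(not a complete implementation):

	/* queue activation path - the config can no longer fail */
	page_pool_enable_direct_recycling(qt->sub0.page_pool, &nv->napi);
	page_pool_enable_direct_recycling(qt->sub1.page_pool, &nv->napi);

	/* queue stop path (added later in the series) - detach the
	 * pools from the NAPI before the vector can go away
	 */
	page_pool_disable_direct_recycling(qt->sub0.page_pool);
	page_pool_disable_direct_recycling(qt->sub1.page_pool);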
* [PATCH net-next 14/15] eth: fbnic: don't pass NAPI into pp alloc
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (12 preceding siblings ...)
2025-08-20 2:57 ` [PATCH net-next 13/15] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
@ 2025-08-20 2:57 ` Jakub Kicinski
2025-08-20 2:57 ` [PATCH net-next 15/15] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
` (2 subsequent siblings)
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:57 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
The queue API may ask us to allocate page pools when the device
is down, to validate that we ingested a memory provider binding.
Don't require a NAPI vector to be passed to
fbnic_alloc_qt_page_pools(), making it possible to call the function
when no NAPI is available.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 958793be21a1..980c8e991c0c 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1515,8 +1515,8 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
}
static int
-fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
- struct fbnic_q_triad *qt, unsigned int rxq_idx)
+fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_q_triad *qt,
+ unsigned int rxq_idx)
{
struct page_pool_params pp_params = {
.order = 0,
@@ -1524,7 +1524,7 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
PP_FLAG_DMA_SYNC_DEV,
.pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
- .dev = nv->dev,
+ .dev = fbn->netdev->dev.parent,
.dma_dir = DMA_BIDIRECTIONAL,
.offset = 0,
.max_len = PAGE_SIZE,
@@ -1971,7 +1971,7 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt, qt->cmpl.q_idx);
+ err = fbnic_alloc_qt_page_pools(fbn, qt, qt->cmpl.q_idx);
if (err)
return err;
--
2.50.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH net-next 15/15] eth: fbnic: support queue ops / zero-copy Rx
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (13 preceding siblings ...)
2025-08-20 2:57 ` [PATCH net-next 14/15] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
@ 2025-08-20 2:57 ` Jakub Kicinski
2025-08-21 7:51 ` [PATCH net-next 00/15] eth: fbnic: support queue API and " Paolo Abeni
2025-08-21 15:20 ` patchwork-bot+netdevbpf
16 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-20 2:57 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, Jakub Kicinski
Support queue ops. fbnic doesn't shut down the entire device
just to restart a single queue.
./tools/testing/selftests/drivers/net/hw/iou-zcrx.py
TAP version 13
1..3
ok 1 iou-zcrx.test_zcrx
ok 2 iou-zcrx.test_zcrx_oneshot
ok 3 iou-zcrx.test_zcrx_rss
# Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 2 +
.../net/ethernet/meta/fbnic/fbnic_netdev.c | 3 +-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 171 ++++++++++++++++++
3 files changed, 174 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index 58ae7f9c8f54..31fac0ba0902 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -156,6 +156,8 @@ struct fbnic_napi_vector {
struct fbnic_q_triad qt[];
};
+extern const struct netdev_queue_mgmt_ops fbnic_queue_mgmt_ops;
+
netdev_tx_t fbnic_xmit_frame(struct sk_buff *skb, struct net_device *dev);
netdev_features_t
fbnic_features_check(struct sk_buff *skb, struct net_device *dev,
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
index 37c900ce8257..abdcf88bc957 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
@@ -747,11 +747,10 @@ struct net_device *fbnic_netdev_alloc(struct fbnic_dev *fbd)
netdev->netdev_ops = &fbnic_netdev_ops;
netdev->stat_ops = &fbnic_stat_ops;
+ netdev->queue_mgmt_ops = &fbnic_queue_mgmt_ops;
fbnic_set_ethtool_ops(netdev);
- netdev->request_ops_lock = true;
-
fbn = netdev_priv(netdev);
fbn->netdev = netdev;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 980c8e991c0c..e891ae8b4d58 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2212,6 +2212,13 @@ static void __fbnic_nv_disable(struct fbnic_napi_vector *nv)
}
}
+static void
+fbnic_nv_disable(struct fbnic_net *fbn, struct fbnic_napi_vector *nv)
+{
+ __fbnic_nv_disable(nv);
+ fbnic_wrfl(fbn->fbd);
+}
+
void fbnic_disable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
@@ -2307,6 +2314,44 @@ int fbnic_wait_all_queues_idle(struct fbnic_dev *fbd, bool may_fail)
return err;
}
+static int
+fbnic_wait_queue_idle(struct fbnic_net *fbn, bool rx, unsigned int idx)
+{
+ static const unsigned int tx_regs[] = {
+ FBNIC_QM_TWQ_IDLE(0), FBNIC_QM_TQS_IDLE(0),
+ FBNIC_QM_TDE_IDLE(0), FBNIC_QM_TCQ_IDLE(0),
+ }, rx_regs[] = {
+ FBNIC_QM_HPQ_IDLE(0), FBNIC_QM_PPQ_IDLE(0),
+ FBNIC_QM_RCQ_IDLE(0),
+ };
+ struct fbnic_dev *fbd = fbn->fbd;
+ unsigned int val, mask, off;
+ const unsigned int *regs;
+ unsigned int reg_cnt;
+ int i, err;
+
+ regs = rx ? rx_regs : tx_regs;
+ reg_cnt = rx ? ARRAY_SIZE(rx_regs) : ARRAY_SIZE(tx_regs);
+
+ off = idx / 32;
+ mask = BIT(idx % 32);
+
+ for (i = 0; i < reg_cnt; i++) {
+ err = read_poll_timeout_atomic(fbnic_rd32, val, val & mask,
+ 2, 500000, false,
+ fbd, regs[i] + off);
+ if (err) {
+ netdev_err(fbd->netdev,
+ "wait for queue %s%d idle failed 0x%04x(%d): %08x (mask: %08x)\n",
+ rx ? "Rx" : "Tx", idx, regs[i] + off, i,
+ val, mask);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
static void fbnic_nv_flush(struct fbnic_napi_vector *nv)
{
int j, t;
@@ -2625,6 +2670,12 @@ static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
}
}
+static void fbnic_nv_enable(struct fbnic_net *fbn, struct fbnic_napi_vector *nv)
+{
+ __fbnic_nv_enable(nv);
+ fbnic_wrfl(fbn->fbd);
+}
+
void fbnic_enable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
@@ -2703,3 +2754,123 @@ void fbnic_napi_depletion_check(struct net_device *netdev)
fbnic_wrfl(fbd);
}
+
+static int fbnic_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ const struct fbnic_q_triad *real;
+ struct fbnic_q_triad *qt = qmem;
+ struct fbnic_napi_vector *nv;
+
+ if (!netif_running(dev))
+ return fbnic_alloc_qt_page_pools(fbn, qt, idx);
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ fbnic_ring_init(&qt->sub0, real->sub0.doorbell, real->sub0.q_idx,
+ real->sub0.flags);
+ fbnic_ring_init(&qt->sub1, real->sub1.doorbell, real->sub1.q_idx,
+ real->sub1.flags);
+ fbnic_ring_init(&qt->cmpl, real->cmpl.doorbell, real->cmpl.q_idx,
+ real->cmpl.flags);
+
+ return fbnic_alloc_rx_qt_resources(fbn, nv, qt);
+}
+
+static void fbnic_queue_mem_free(struct net_device *dev, void *qmem)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ struct fbnic_q_triad *qt = qmem;
+
+ if (!netif_running(dev))
+ fbnic_free_qt_page_pools(qt);
+ else
+ fbnic_free_qt_resources(fbn, qt);
+}
+
+static void __fbnic_nv_restart(struct fbnic_net *fbn,
+ struct fbnic_napi_vector *nv)
+{
+ struct fbnic_dev *fbd = fbn->fbd;
+ int i;
+
+ fbnic_nv_enable(fbn, nv);
+ fbnic_nv_fill(nv);
+
+ napi_enable_locked(&nv->napi);
+ fbnic_nv_irq_enable(nv);
+ fbnic_wr32(fbd, FBNIC_INTR_SET(nv->v_idx / 32), BIT(nv->v_idx % 32));
+ fbnic_wrfl(fbd);
+
+ for (i = 0; i < nv->txt_count; i++)
+ netif_wake_subqueue(fbn->netdev, nv->qt[i].sub0.q_idx);
+}
+
+static int fbnic_queue_start(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ struct fbnic_napi_vector *nv;
+ struct fbnic_q_triad *real;
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ fbnic_aggregate_ring_rx_counters(fbn, &real->sub0);
+ fbnic_aggregate_ring_rx_counters(fbn, &real->sub1);
+ fbnic_aggregate_ring_rx_counters(fbn, &real->cmpl);
+
+ memcpy(real, qmem, sizeof(*real));
+
+ __fbnic_nv_restart(fbn, nv);
+
+ return 0;
+}
+
+static int fbnic_queue_stop(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ const struct fbnic_q_triad *real;
+ struct fbnic_napi_vector *nv;
+ int i, t;
+ int err;
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ napi_disable_locked(&nv->napi);
+ fbnic_nv_irq_disable(nv);
+
+ for (i = 0; i < nv->txt_count; i++)
+ netif_stop_subqueue(dev, nv->qt[i].sub0.q_idx);
+ fbnic_nv_disable(fbn, nv);
+
+ for (t = 0; t < nv->txt_count + nv->rxt_count; t++) {
+ err = fbnic_wait_queue_idle(fbn, t >= nv->txt_count,
+ nv->qt[t].sub0.q_idx);
+ if (err)
+ goto err_restart;
+ }
+
+ fbnic_synchronize_irq(fbn->fbd, nv->v_idx);
+ fbnic_nv_flush(nv);
+
+ page_pool_disable_direct_recycling(real->sub0.page_pool);
+ page_pool_disable_direct_recycling(real->sub1.page_pool);
+
+ memcpy(qmem, real, sizeof(*real));
+
+ return 0;
+
+err_restart:
+ __fbnic_nv_restart(fbn, nv);
+ return err;
+}
+
+const struct netdev_queue_mgmt_ops fbnic_queue_mgmt_ops = {
+ .ndo_queue_mem_size = sizeof(struct fbnic_q_triad),
+ .ndo_queue_mem_alloc = fbnic_queue_mem_alloc,
+ .ndo_queue_mem_free = fbnic_queue_mem_free,
+ .ndo_queue_start = fbnic_queue_start,
+ .ndo_queue_stop = fbnic_queue_stop,
+};
--
2.50.1
^ permalink raw reply related [flat|nested] 33+ messages in thread
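For context, the core drives these callbacks roughly as follows when
restarting a queue (a simplified sketch of net/core/netdev_rx_queue.c,
not verbatim):

	/* new_mem and old_mem are scratch buffers of
	 * ndo_queue_mem_size bytes each
	 */
	err = qops->ndo_queue_mem_alloc(dev, new_mem, rxq_idx);
	if (err)
		return err;

	if (netif_running(dev)) {
		err = qops->ndo_queue_stop(dev, old_mem, rxq_idx);
		if (err)
			goto err_free_new;

		err = qops->ndo_queue_start(dev, new_mem, rxq_idx);
		if (err)
			goto err_start_old;	/* bring old queue back */
	}

	qops->ndo_queue_mem_free(dev, old_mem);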
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (14 preceding siblings ...)
2025-08-20 2:57 ` [PATCH net-next 15/15] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
@ 2025-08-21 7:51 ` Paolo Abeni
2025-08-21 14:28 ` Jakub Kicinski
2025-08-21 15:20 ` patchwork-bot+netdevbpf
16 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-08-21 7:51 UTC (permalink / raw)
To: Jakub Kicinski
Cc: netdev, edumazet, andrew+netdev, horms, almasrymina, michael.chan,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
davem
On 8/20/25 4:56 AM, Jakub Kicinski wrote:
> Add support for queue API to fbnic, enable zero-copy Rx.
>
> The first patch adds page_pool_get(); I alluded to this
> new helper when discussing commit 64fdaa94bfe0 ("net: page_pool:
> allow enabling recycling late, fix false positive warning").
> For page pool-oriented reviewers another patch of interest
> is patch 11, which adds a helper to test whether an rxq wants
> to create an unreadable page pool. mlx5 already has this
> sort of check; we said we would add a helper when more
> drivers need it (IIRC), so I guess now is the time.
>
> Patches 2-4 reshuffle the Rx init/allocation path to better
> align structures and functions which operate on them. Notably
> patch 2 moves the page pool pointer to the queue struct (from
> NAPI).
>
> Patch 5 converts the driver to use netmem_ref. The driver has
> separate and explicit buffer queue for scatter / payloads,
> so only references to those are converted.
>
> Next 5 patches are more boring code shifts.
>
> Patch 12 adds unreadable memory support to page pool allocation.
>
> Patch 15 finally adds the support for queue API.
>
> $ ./tools/testing/selftests/drivers/net/hw/iou-zcrx.py
> TAP version 13
> 1..3
> ok 1 iou-zcrx.test_zcrx
> ok 2 iou-zcrx.test_zcrx_oneshot
> ok 3 iou-zcrx.test_zcrx_rss
> # Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
Blindly noting that this series is apparently causing a few H/W
selftest failures, even if e.g. this one:
# ok 2 ping.test_default_v6
# # Exception| Traceback (most recent call last):
# # Exception| File
"/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/ksft.py",
line 244, in ksft_run
# # Exception| case(*args)
# # Exception| File
"/home/virtme/testing/wt-24/tools/testing/selftests/drivers/net/./ping.py",
line 173, in test_xdp_generic_sb
# # Exception| _set_xdp_generic_sb_on(cfg)
# # Exception| File
"/home/virtme/testing/wt-24/tools/testing/selftests/drivers/net/./ping.py",
line 72, in _set_xdp_generic_sb_on
# # Exception| cmd(f"ip link set dev {cfg.ifname} mtu 1500
xdpgeneric obj {prog} sec xdp", shell=True)
# # Exception| File
"/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/utils.py",
line 71, in __init__
# # Exception| self.process(terminate=False, fail=fail, timeout=timeout)
# # Exception| File
"/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/py/utils.py",
line 91, in process
# # Exception| raise CmdExitFailure("Command failed: %s\nSTDOUT:
%s\nSTDERR: %s" %
# # Exception| net.lib.py.utils.CmdExitFailure: Command failed: ip link
set dev enp1s0 mtu 1500 xdpgeneric obj
/home/virtme/testing/wt-24/tools/testing/selftests/net/lib/xdp_dummy.bpf.o
sec xdp
# # Exception| STDOUT: b''
# # Exception| STDERR: b'Error: unable to install XDP to device using
tcp-data-split.\n'
# not ok 3 ping.test_xdp_generic_sb
looks more related to commit 2b30fc01a6c788ed4a799ed8a6f42ed9ac82417f
(but ping did not fail back then)
/P
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 7:51 ` [PATCH net-next 00/15] eth: fbnic: support queue API and " Paolo Abeni
@ 2025-08-21 14:28 ` Jakub Kicinski
2025-08-21 14:53 ` Taehee Yoo
2025-08-21 15:02 ` Paolo Abeni
0 siblings, 2 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-21 14:28 UTC (permalink / raw)
To: Paolo Abeni, Taehee Yoo
Cc: netdev, edumazet, andrew+netdev, horms, almasrymina, michael.chan,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
davem
On Thu, 21 Aug 2025 09:51:55 +0200 Paolo Abeni wrote:
> Blindly noting
I haven't looked closely either :) but my gut feeling is that this
is because the devmem test doesn't clean up after itself. It used to
bail out sooner; with this series it goes further in messing up the
config, and then all tests that run after have a misconfigured NIC.
Taehee, are you planning to work on addressing that? I'm happy to fix
it up myself, if you aren't already halfway done with the changes
yourself, or planning to do it soon.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 14:28 ` Jakub Kicinski
@ 2025-08-21 14:53 ` Taehee Yoo
2025-08-21 15:03 ` Jakub Kicinski
2025-08-21 15:02 ` Paolo Abeni
1 sibling, 1 reply; 33+ messages in thread
From: Taehee Yoo @ 2025-08-21 14:53 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Paolo Abeni, netdev, edumazet, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, davem
On Thu, Aug 21, 2025 at 11:28 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 21 Aug 2025 09:51:55 +0200 Paolo Abeni wrote:
> > Blindly noting
>
Hi Jakub and Paolo,
> I haven't looked closely either :) but my gut feeling is that this
> is because the devmem test doesn't clean up after itself. It used to
> bail out sooner; with this series it goes further in messing up the
> config, and then all tests that run after have a misconfigured NIC.
>
> Taehee, are you planning to work on addressing that? I'm happy to fix
> it up myself, if you aren't already halfway done with the changes
> yourself, or planning to do it soon.
Apologies for the delayed action.
I would appreciate it if you could address this issue.
Thank you so much!
Taehee Yoo
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 14:53 ` Taehee Yoo
@ 2025-08-21 15:03 ` Jakub Kicinski
2025-08-21 15:22 ` Mina Almasry
0 siblings, 1 reply; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-21 15:03 UTC (permalink / raw)
To: Taehee Yoo
Cc: Paolo Abeni, netdev, edumazet, andrew+netdev, horms, almasrymina,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, davem
On Thu, 21 Aug 2025 23:53:37 +0900 Taehee Yoo wrote:
> > I haven't looked closely either :) but my gut feeling is that this
> > is because the devmem test doesn't clean up after itself. It used to
> > bail out sooner; with this series it goes further in messing up the
> > config, and then all tests that run after have a misconfigured NIC.
> >
> > Taehee, are you planning to work on addressing that? I'm happy to fix
> > it up myself, if you aren't already halfway done with the changes
> > yourself, or planning to do it soon.
>
> Apologies for the delayed action.
> I would appreciate it if you could address this issue.
Will do, thanks!
Let me apply the first patch of this series, and the rest has to wait
until I fix the devmem test, I guess.
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 15:03 ` Jakub Kicinski
@ 2025-08-21 15:22 ` Mina Almasry
2025-08-21 15:42 ` Jakub Kicinski
0 siblings, 1 reply; 33+ messages in thread
From: Mina Almasry @ 2025-08-21 15:22 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Taehee Yoo, Paolo Abeni, netdev, edumazet, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, davem
On Thu, Aug 21, 2025 at 8:03 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Thu, 21 Aug 2025 23:53:37 +0900 Taehee Yoo wrote:
> > > I haven't looked closely either :) but my gut feeling is that this
> > > is because the devmem test doesn't clean up after itself. It used to
> > > bail out sooner; with this series it goes further in messing up the
> > > config, and then all tests that run after have a misconfigured NIC.
> > >
> > > Taehee, are you planning to work on addressing that? I'm happy to fix
> > > it up myself, if you aren't already halfway done with the changes
> > > yourself, or planning to do it soon.
> >
> > Apologies for the delayed action.
> > I would appreciate it if you could address this issue.
>
> Will do, thanks!
>
> Let me apply the first patch of this series, and the rest has to wait
> until I fix the devmem test, I guess.
I'll take a look.
...although I happen to be running into a random machine capacity
issue at the moment. I hope to resolve that sometime this week and
look into this.
--
Thanks,
Mina
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 15:22 ` Mina Almasry
@ 2025-08-21 15:42 ` Jakub Kicinski
0 siblings, 0 replies; 33+ messages in thread
From: Jakub Kicinski @ 2025-08-21 15:42 UTC (permalink / raw)
To: Mina Almasry
Cc: Taehee Yoo, Paolo Abeni, netdev, edumazet, andrew+netdev, horms,
michael.chan, tariqt, dtatulea, hawk, ilias.apalodimas,
alexanderduyck, sdf, davem
On Thu, 21 Aug 2025 08:22:27 -0700 Mina Almasry wrote:
> > > Apologies for the delayed action.
> > > I would appreciate it if you could address this issue.
> >
> > Will do, thanks!
> >
> > Let me apply the first patch of this series, and the rest has to wait
> > until I fix the devmem test, I guess.
>
> I'll take a look.
>
> ...although I happen to be running into a random machine capacity
> issue at the moment. I hope to resolve that sometime this week and
> look into this.
Hm, would it be useful for you to have access to fbnic-capable QEMU?
Having a reasonably advanced driver on QEMU is a major productivity
boost for me. But I suppose you already have a SW backend for GVE,
so it's more of a development flow thing? IOW you don't just run stuff
on your laptop anyway?
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-21 14:28 ` Jakub Kicinski
2025-08-21 14:53 ` Taehee Yoo
@ 2025-08-21 15:02 ` Paolo Abeni
1 sibling, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-08-21 15:02 UTC (permalink / raw)
To: Jakub Kicinski, Taehee Yoo
Cc: netdev, edumazet, andrew+netdev, horms, almasrymina, michael.chan,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
davem
On 8/21/25 4:28 PM, Jakub Kicinski wrote:
> On Thu, 21 Aug 2025 09:51:55 +0200 Paolo Abeni wrote:
>> Blindly noting
>
> I haven't looked closely either :) but my gut feeling is that this
> is because the devmem test doesn't clean up after itself. It used to
> bail out sooner; with this series it goes further in messing up the
> config, and then all tests that run after have a misconfigured NIC.
Possibly I was not clear in my previous email: I suspect that the
ping.py issue is a real one, tied to driver changes - even if to
already merged ones and not to the pending patches.
/P
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx
2025-08-20 2:56 [PATCH net-next 00/15] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (15 preceding siblings ...)
2025-08-21 7:51 ` [PATCH net-next 00/15] eth: fbnic: support queue API and " Paolo Abeni
@ 2025-08-21 15:20 ` patchwork-bot+netdevbpf
16 siblings, 0 replies; 33+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-08-21 15:20 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms,
almasrymina, michael.chan, tariqt, dtatulea, hawk,
ilias.apalodimas, alexanderduyck, sdf
Hello:
This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Tue, 19 Aug 2025 19:56:49 -0700 you wrote:
> Add support for queue API to fbnic, enable zero-copy Rx.
>
> The first patch adds page_pool_get(); I alluded to this
> new helper when discussing commit 64fdaa94bfe0 ("net: page_pool:
> allow enabling recycling late, fix false positive warning").
> For page pool-oriented reviewers another patch of interest
> is patch 11, which adds a helper to test whether an rxq wants
> to create an unreadable page pool. mlx5 already has this
> sort of check; we said we would add a helper when more
> drivers need it (IIRC), so I guess now is the time.
>
> [...]
Here is the summary with links:
- [net-next,01/15] net: page_pool: add page_pool_get()
https://git.kernel.org/netdev/net-next/c/07cf71bf25cd
- [net-next,02/15] eth: fbnic: move page pool pointer from NAPI to the ring struct
(no matching commit)
- [net-next,03/15] eth: fbnic: move xdp_rxq_info_reg() to resource alloc
(no matching commit)
- [net-next,04/15] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources()
(no matching commit)
- [net-next,05/15] eth: fbnic: use netmem_ref where applicable
(no matching commit)
- [net-next,06/15] eth: fbnic: request ops lock
(no matching commit)
- [net-next,07/15] eth: fbnic: split fbnic_disable()
(no matching commit)
- [net-next,08/15] eth: fbnic: split fbnic_flush()
(no matching commit)
- [net-next,09/15] eth: fbnic: split fbnic_enable()
(no matching commit)
- [net-next,10/15] eth: fbnic: split fbnic_fill()
(no matching commit)
- [net-next,11/15] net: page_pool: add helper to pre-check if PP will be unreadable
(no matching commit)
- [net-next,12/15] eth: fbnic: allocate unreadable page pool for the payloads
(no matching commit)
- [net-next,13/15] eth: fbnic: defer page pool recycling activation to queue start
(no matching commit)
- [net-next,14/15] eth: fbnic: don't pass NAPI into pp alloc
(no matching commit)
- [net-next,15/15] eth: fbnic: support queue ops / zero-copy Rx
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
^ permalink raw reply [flat|nested] 33+ messages in thread