* [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx
@ 2025-08-29 1:22 Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 01/14] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
` (13 more replies)
0 siblings, 14 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Add support for queue API to fbnic, enable zero-copy Rx.
Patch 10 is likely of most interest as it adds a new core helper
(and touches mlx5). The rest of the patches are fbnic-specific
(and relatively boring).
Patches 1-3 reshuffle the Rx init/allocation path to better
align structures and functions which operate on them. Notably
patch 1 moves the page pool pointer to the queue struct (from NAPI).
Patch 4 converts the driver to use netmem_ref. The driver has
a separate, explicit buffer queue for scatter / payloads, so only
references to those are converted.
The next 5 patches are more boring code shifts.
Patch 11 adds unreadable memory support to page pool allocation.
Patch 14 finally adds support for the queue API.
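For reference, wiring up the queue API boils down to filling in
struct netdev_queue_mgmt_ops. A rough sketch of the shape is below;
this is not fbnic's actual implementation, just the existing ops
from include/net/netdev_queues.h with made-up example_* stubs:

struct example_queue_mem {
        int dummy; /* per-queue rings, page pools, etc. live here */
};

static int example_queue_mem_alloc(struct net_device *dev,
                                   void *per_queue_mem, int idx)
{
        /* Allocate rings and page pools for queue @idx into the
         * opaque per_queue_mem blob, without touching live state.
         */
        return 0;
}

static void example_queue_mem_free(struct net_device *dev,
                                   void *per_queue_mem)
{
        /* Undo example_queue_mem_alloc() */
}

static int example_queue_start(struct net_device *dev,
                               void *per_queue_mem, int idx)
{
        /* Swap the new memory in and (re)start queue @idx */
        return 0;
}

static int example_queue_stop(struct net_device *dev,
                              void *per_queue_mem, int idx)
{
        /* Quiesce queue @idx and stash its old memory in per_queue_mem */
        return 0;
}

static const struct netdev_queue_mgmt_ops example_queue_mgmt_ops = {
        .ndo_queue_mem_size  = sizeof(struct example_queue_mem),
        .ndo_queue_mem_alloc = example_queue_mem_alloc,
        .ndo_queue_mem_free  = example_queue_mem_free,
        .ndo_queue_start     = example_queue_start,
        .ndo_queue_stop      = example_queue_stop,
};

The ops struct then gets assigned to netdev->queue_mgmt_ops during
netdev setup.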
v2:
- rework patch 10
- update commit message in patch 11
v1: https://lore.kernel.org/20250820025704.166248-1-kuba@kernel.org
Jakub Kicinski (14):
eth: fbnic: move page pool pointer from NAPI to the ring struct
eth: fbnic: move xdp_rxq_info_reg() to resource alloc
eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources()
eth: fbnic: use netmem_ref where applicable
eth: fbnic: request ops lock
eth: fbnic: split fbnic_disable()
eth: fbnic: split fbnic_flush()
eth: fbnic: split fbnic_enable()
eth: fbnic: split fbnic_fill()
net: add helper to pre-check if PP for an Rx queue will be unreadable
eth: fbnic: allocate unreadable page pool for the payloads
eth: fbnic: defer page pool recycling activation to queue start
eth: fbnic: don't pass NAPI into pp alloc
eth: fbnic: support queue ops / zero-copy Rx
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 20 +-
include/net/netdev_queues.h | 2 +
include/net/page_pool/helpers.h | 12 +
.../net/ethernet/mellanox/mlx5/core/en_main.c | 9 +-
.../net/ethernet/meta/fbnic/fbnic_netdev.c | 1 +
drivers/net/ethernet/meta/fbnic/fbnic_pci.c | 9 +-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 621 ++++++++++++------
net/core/netdev_rx_queue.c | 9 +
8 files changed, 454 insertions(+), 229 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH net-next v2 01/14] eth: fbnic: move page pool pointer from NAPI to the ring struct
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 02/14] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
` (12 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
In preparation for memory providers we need a closer association
between queues and page pools. We used to have a page pool at the
NAPI level to serve all associated queues, but with MP the queues
under a NAPI may no longer be created equal.
The "ring" structure in fbnic is a descriptor ring. We have separate
"rings" for payload and header pages ("to device"), as well as a ring
for completions ("from device"). Technically we only need the page
pool pointers in the "to device" rings, so adding the pointer to
the ring struct is a bit wasteful. But it makes passing the structures
around much easier.
For now both "to device" rings store a pointer to the same
page pool. Using more than one queue per NAPI is extremely rare,
so don't bother trying to share a single page pool between queues.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 16 ++--
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 83 +++++++++++---------
2 files changed, 55 insertions(+), 44 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index 873440ca6a31..a935a1acfb3e 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -121,11 +121,16 @@ struct fbnic_ring {
u32 head, tail; /* Head/Tail of ring */
- /* Deferred_head is used to cache the head for TWQ1 if an attempt
- * is made to clean TWQ1 with zero napi_budget. We do not use it for
- * any other ring.
- */
- s32 deferred_head;
+ union {
+ /* Rx BDQs only */
+ struct page_pool *page_pool;
+
+ /* Deferred_head is used to cache the head for TWQ1 if
+ * an attempt is made to clean TWQ1 with zero napi_budget.
+ * We do not use it for any other ring.
+ */
+ s32 deferred_head;
+ };
struct fbnic_queue_stats stats;
@@ -142,7 +147,6 @@ struct fbnic_q_triad {
struct fbnic_napi_vector {
struct napi_struct napi;
struct device *dev; /* Device for DMA unmapping */
- struct page_pool *page_pool;
struct fbnic_dev *fbd;
u16 v_idx;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index fea4577e38d4..7f8bdb08db9f 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -640,7 +640,7 @@ static void fbnic_clean_twq1(struct fbnic_napi_vector *nv, bool pp_allow_direct,
FBNIC_TWD_TYPE_AL;
total_bytes += FIELD_GET(FBNIC_TWD_LEN_MASK, twd);
- page_pool_put_page(nv->page_pool, page, -1, pp_allow_direct);
+ page_pool_put_page(page->pp, page, -1, pp_allow_direct);
next_desc:
head++;
head &= ring->size_mask;
@@ -735,13 +735,13 @@ static struct page *fbnic_page_pool_get(struct fbnic_ring *ring,
}
static void fbnic_page_pool_drain(struct fbnic_ring *ring, unsigned int idx,
- struct fbnic_napi_vector *nv, int budget)
+ int budget)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
struct page *page = rx_buf->page;
if (!page_pool_unref_page(page, rx_buf->pagecnt_bias))
- page_pool_put_unrefed_page(nv->page_pool, page, -1, !!budget);
+ page_pool_put_unrefed_page(ring->page_pool, page, -1, !!budget);
rx_buf->page = NULL;
}
@@ -826,8 +826,8 @@ fbnic_clean_tcq(struct fbnic_napi_vector *nv, struct fbnic_q_triad *qt,
fbnic_clean_twq(nv, napi_budget, qt, ts_head, head0, head1);
}
-static void fbnic_clean_bdq(struct fbnic_napi_vector *nv, int napi_budget,
- struct fbnic_ring *ring, unsigned int hw_head)
+static void fbnic_clean_bdq(struct fbnic_ring *ring, unsigned int hw_head,
+ int napi_budget)
{
unsigned int head = ring->head;
@@ -835,7 +835,7 @@ static void fbnic_clean_bdq(struct fbnic_napi_vector *nv, int napi_budget,
return;
do {
- fbnic_page_pool_drain(ring, head, nv, napi_budget);
+ fbnic_page_pool_drain(ring, head, napi_budget);
head++;
head &= ring->size_mask;
@@ -865,7 +865,7 @@ static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, struct page *page)
} while (--i);
}
-static void fbnic_fill_bdq(struct fbnic_napi_vector *nv, struct fbnic_ring *bdq)
+static void fbnic_fill_bdq(struct fbnic_ring *bdq)
{
unsigned int count = fbnic_desc_unused(bdq);
unsigned int i = bdq->tail;
@@ -876,7 +876,7 @@ static void fbnic_fill_bdq(struct fbnic_napi_vector *nv, struct fbnic_ring *bdq)
do {
struct page *page;
- page = page_pool_dev_alloc_pages(nv->page_pool);
+ page = page_pool_dev_alloc_pages(bdq->page_pool);
if (!page) {
u64_stats_update_begin(&bdq->stats.syncp);
bdq->stats.rx.alloc_failed++;
@@ -997,7 +997,7 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
}
}
-static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
+static void fbnic_put_pkt_buff(struct fbnic_q_triad *qt,
struct fbnic_pkt_buff *pkt, int budget)
{
struct page *page;
@@ -1014,12 +1014,13 @@ static void fbnic_put_pkt_buff(struct fbnic_napi_vector *nv,
while (nr_frags--) {
page = skb_frag_page(&shinfo->frags[nr_frags]);
- page_pool_put_full_page(nv->page_pool, page, !!budget);
+ page_pool_put_full_page(qt->sub1.page_pool, page,
+ !!budget);
}
}
page = virt_to_page(pkt->buff.data_hard_start);
- page_pool_put_full_page(nv->page_pool, page, !!budget);
+ page_pool_put_full_page(qt->sub0.page_pool, page, !!budget);
}
static struct sk_buff *fbnic_build_skb(struct fbnic_napi_vector *nv,
@@ -1274,7 +1275,7 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
dropped++;
}
- fbnic_put_pkt_buff(nv, pkt, 1);
+ fbnic_put_pkt_buff(qt, pkt, 1);
}
pkt->buff.data_hard_start = NULL;
@@ -1307,12 +1308,12 @@ static int fbnic_clean_rcq(struct fbnic_napi_vector *nv,
/* Unmap and free processed buffers */
if (head0 >= 0)
- fbnic_clean_bdq(nv, budget, &qt->sub0, head0);
- fbnic_fill_bdq(nv, &qt->sub0);
+ fbnic_clean_bdq(&qt->sub0, head0, budget);
+ fbnic_fill_bdq(&qt->sub0);
if (head1 >= 0)
- fbnic_clean_bdq(nv, budget, &qt->sub1, head1);
- fbnic_fill_bdq(nv, &qt->sub1);
+ fbnic_clean_bdq(&qt->sub1, head1, budget);
+ fbnic_fill_bdq(&qt->sub1);
/* Record the current head/tail of the queue */
if (rcq->head != head) {
@@ -1462,6 +1463,12 @@ static void fbnic_remove_rx_ring(struct fbnic_net *fbn,
fbn->rx[rxr->q_idx] = NULL;
}
+static void fbnic_free_qt_page_pools(struct fbnic_q_triad *qt)
+{
+ page_pool_destroy(qt->sub0.page_pool);
+ page_pool_destroy(qt->sub1.page_pool);
+}
+
static void fbnic_free_napi_vector(struct fbnic_net *fbn,
struct fbnic_napi_vector *nv)
{
@@ -1479,10 +1486,10 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
+ fbnic_free_qt_page_pools(&nv->qt[i]);
}
fbnic_napi_free_irq(fbd, nv);
- page_pool_destroy(nv->page_pool);
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
@@ -1500,13 +1507,14 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
#define FBNIC_PAGE_POOL_FLAGS \
(PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV)
-static int fbnic_alloc_nv_page_pool(struct fbnic_net *fbn,
- struct fbnic_napi_vector *nv)
+static int
+fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
+ struct fbnic_q_triad *qt)
{
struct page_pool_params pp_params = {
.order = 0,
.flags = FBNIC_PAGE_POOL_FLAGS,
- .pool_size = (fbn->hpq_size + fbn->ppq_size) * nv->rxt_count,
+ .pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
.dev = nv->dev,
.dma_dir = DMA_BIDIRECTIONAL,
@@ -1533,7 +1541,9 @@ static int fbnic_alloc_nv_page_pool(struct fbnic_net *fbn,
if (IS_ERR(pp))
return PTR_ERR(pp);
- nv->page_pool = pp;
+ qt->sub0.page_pool = pp;
+ page_pool_get(pp);
+ qt->sub1.page_pool = pp;
return 0;
}
@@ -1599,17 +1609,10 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
/* Tie nv back to PCIe dev */
nv->dev = fbd->dev;
- /* Allocate page pool */
- if (rxq_count) {
- err = fbnic_alloc_nv_page_pool(fbn, nv);
- if (err)
- goto napi_del;
- }
-
/* Request the IRQ for napi vector */
err = fbnic_napi_request_irq(fbd, nv);
if (err)
- goto pp_destroy;
+ goto napi_del;
/* Initialize queue triads */
qt = nv->qt;
@@ -1679,10 +1682,14 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
fbnic_ring_init(&qt->cmpl, db, rxq_idx, FBNIC_RING_F_STATS);
fbn->rx[rxq_idx] = &qt->cmpl;
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
+ if (err)
+ goto free_ring_cur_qt;
+
err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, rxq_idx,
nv->napi.napi_id);
if (err)
- goto free_ring_cur_qt;
+ goto free_qt_pp;
/* Update Rx queue index */
rxt_count--;
@@ -1698,6 +1705,8 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
qt--;
xdp_rxq_info_unreg(&qt->xdp_rxq);
+free_qt_pp:
+ fbnic_free_qt_page_pools(qt);
free_ring_cur_qt:
fbnic_remove_rx_ring(fbn, &qt->sub0);
fbnic_remove_rx_ring(fbn, &qt->sub1);
@@ -1714,8 +1723,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
txt_count++;
}
fbnic_napi_free_irq(fbd, nv);
-pp_destroy:
- page_pool_destroy(nv->page_pool);
napi_del:
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
@@ -2019,7 +2026,7 @@ static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
/* Register XDP memory model for completion queue */
err = xdp_reg_mem_model(&nv->qt[i].xdp_rxq.mem,
MEM_TYPE_PAGE_POOL,
- nv->page_pool);
+ nv->qt[i].sub0.page_pool);
if (err)
goto xdp_unreg_mem_model;
@@ -2333,13 +2340,13 @@ void fbnic_flush(struct fbnic_net *fbn)
struct fbnic_q_triad *qt = &nv->qt[t];
/* Clean the work queues of unprocessed work */
- fbnic_clean_bdq(nv, 0, &qt->sub0, qt->sub0.tail);
- fbnic_clean_bdq(nv, 0, &qt->sub1, qt->sub1.tail);
+ fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
+ fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
/* Reset completion queue descriptor ring */
memset(qt->cmpl.desc, 0, qt->cmpl.size);
- fbnic_put_pkt_buff(nv, qt->cmpl.pkt, 0);
+ fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
}
}
@@ -2360,8 +2367,8 @@ void fbnic_fill(struct fbnic_net *fbn)
struct fbnic_q_triad *qt = &nv->qt[t];
/* Populate the header and payload BDQs */
- fbnic_fill_bdq(nv, &qt->sub0);
- fbnic_fill_bdq(nv, &qt->sub1);
+ fbnic_fill_bdq(&qt->sub0);
+ fbnic_fill_bdq(&qt->sub1);
}
}
}
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 02/14] eth: fbnic: move xdp_rxq_info_reg() to resource alloc
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 01/14] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 03/14] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
` (11 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Move rxq_info and mem model registration from fbnic_alloc_napi_vector()
and fbnic_alloc_nv_resources() to fbnic_alloc_rx_qt_resources().
The rxq_info is now registered later in the process, but that
should not cause any issues.
rxq_info lives in the fbnic_q_triad (qt) struct so qt init is a more
natural place. Encapsulating the logic in the qt functions will also
allow simplifying the cleanup in the NAPI related alloc functions
in the next commit.
Rx does not have a dedicated fbnic_free_rx_qt_resources(),
but we can use xdp_rxq_info_is_reg() to tell whether a given
rxq_info was in use (effectively, whether it's a qt for an Rx queue).
Having to pass nv into fbnic_alloc_rx_qt_resources() is not
great in terms of layering, but that's temporary; the pp will
move soon.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 58 +++++++++-----------
1 file changed, 26 insertions(+), 32 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7f8bdb08db9f..29a780f72c14 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1482,7 +1482,6 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
}
for (j = 0; j < nv->rxt_count; j++, i++) {
- xdp_rxq_info_unreg(&nv->qt[i].xdp_rxq);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
@@ -1686,11 +1685,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
if (err)
goto free_ring_cur_qt;
- err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, rxq_idx,
- nv->napi.napi_id);
- if (err)
- goto free_qt_pp;
-
/* Update Rx queue index */
rxt_count--;
rxq_idx += v_count;
@@ -1704,8 +1698,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
while (rxt_count < nv->rxt_count) {
qt--;
- xdp_rxq_info_unreg(&qt->xdp_rxq);
-free_qt_pp:
fbnic_free_qt_page_pools(qt);
free_ring_cur_qt:
fbnic_remove_rx_ring(fbn, &qt->sub0);
@@ -1938,6 +1930,11 @@ static void fbnic_free_qt_resources(struct fbnic_net *fbn,
fbnic_free_ring_resources(dev, &qt->cmpl);
fbnic_free_ring_resources(dev, &qt->sub1);
fbnic_free_ring_resources(dev, &qt->sub0);
+
+ if (xdp_rxq_info_is_reg(&qt->xdp_rxq)) {
+ xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
+ xdp_rxq_info_unreg(&qt->xdp_rxq);
+ }
}
static int fbnic_alloc_tx_qt_resources(struct fbnic_net *fbn,
@@ -1968,15 +1965,27 @@ static int fbnic_alloc_tx_qt_resources(struct fbnic_net *fbn,
}
static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
+ struct fbnic_napi_vector *nv,
struct fbnic_q_triad *qt)
{
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub0);
+ err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
+ nv->napi.napi_id);
if (err)
return err;
+ /* Register XDP memory model for completion queue */
+ err = xdp_rxq_info_reg_mem_model(&qt->xdp_rxq, MEM_TYPE_PAGE_POOL,
+ qt->sub0.page_pool);
+ if (err)
+ goto unreg_rxq;
+
+ err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub0);
+ if (err)
+ goto unreg_mm;
+
err = fbnic_alloc_rx_ring_resources(fbn, &qt->sub1);
if (err)
goto free_sub0;
@@ -1991,22 +2000,20 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
fbnic_free_ring_resources(dev, &qt->sub1);
free_sub0:
fbnic_free_ring_resources(dev, &qt->sub0);
+unreg_mm:
+ xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
+unreg_rxq:
+ xdp_rxq_info_unreg(&qt->xdp_rxq);
return err;
}
static void fbnic_free_nv_resources(struct fbnic_net *fbn,
struct fbnic_napi_vector *nv)
{
- int i, j;
+ int i;
- /* Free Tx Resources */
- for (i = 0; i < nv->txt_count; i++)
+ for (i = 0; i < nv->txt_count + nv->rxt_count; i++)
fbnic_free_qt_resources(fbn, &nv->qt[i]);
-
- for (j = 0; j < nv->rxt_count; j++, i++) {
- fbnic_free_qt_resources(fbn, &nv->qt[i]);
- xdp_rxq_info_unreg_mem_model(&nv->qt[i].xdp_rxq);
- }
}
static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
@@ -2023,26 +2030,13 @@ static int fbnic_alloc_nv_resources(struct fbnic_net *fbn,
/* Allocate Rx Resources */
for (j = 0; j < nv->rxt_count; j++, i++) {
- /* Register XDP memory model for completion queue */
- err = xdp_reg_mem_model(&nv->qt[i].xdp_rxq.mem,
- MEM_TYPE_PAGE_POOL,
- nv->qt[i].sub0.page_pool);
+ err = fbnic_alloc_rx_qt_resources(fbn, nv, &nv->qt[i]);
if (err)
- goto xdp_unreg_mem_model;
-
- err = fbnic_alloc_rx_qt_resources(fbn, &nv->qt[i]);
- if (err)
- goto xdp_unreg_cur_model;
+ goto free_qt_resources;
}
return 0;
-xdp_unreg_mem_model:
- while (j-- && i--) {
- fbnic_free_qt_resources(fbn, &nv->qt[i]);
-xdp_unreg_cur_model:
- xdp_rxq_info_unreg_mem_model(&nv->qt[i].xdp_rxq);
- }
free_qt_resources:
while (i--)
fbnic_free_qt_resources(fbn, &nv->qt[i]);
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 03/14] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources()
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 01/14] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 02/14] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 04/14] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
` (10 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Page pools are now at the ring level, so move page pool allocation
to fbnic_alloc_rx_qt_resources(), and freeing to
fbnic_free_qt_resources().
This significantly simplifies error handling in
fbnic_alloc_napi_vector() by removing a late failure point.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 37 +++++---------------
1 file changed, 9 insertions(+), 28 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 29a780f72c14..15ebbaa0bed2 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1485,7 +1485,6 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub0);
fbnic_remove_rx_ring(fbn, &nv->qt[i].sub1);
fbnic_remove_rx_ring(fbn, &nv->qt[i].cmpl);
- fbnic_free_qt_page_pools(&nv->qt[i]);
}
fbnic_napi_free_irq(fbd, nv);
@@ -1681,10 +1680,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
fbnic_ring_init(&qt->cmpl, db, rxq_idx, FBNIC_RING_F_STATS);
fbn->rx[rxq_idx] = &qt->cmpl;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
- if (err)
- goto free_ring_cur_qt;
-
/* Update Rx queue index */
rxt_count--;
rxq_idx += v_count;
@@ -1695,26 +1690,6 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
return 0;
- while (rxt_count < nv->rxt_count) {
- qt--;
-
- fbnic_free_qt_page_pools(qt);
-free_ring_cur_qt:
- fbnic_remove_rx_ring(fbn, &qt->sub0);
- fbnic_remove_rx_ring(fbn, &qt->sub1);
- fbnic_remove_rx_ring(fbn, &qt->cmpl);
- rxt_count++;
- }
- while (txt_count < nv->txt_count) {
- qt--;
-
- fbnic_remove_tx_ring(fbn, &qt->sub0);
- fbnic_remove_xdp_ring(fbn, &qt->sub1);
- fbnic_remove_tx_ring(fbn, &qt->cmpl);
-
- txt_count++;
- }
- fbnic_napi_free_irq(fbd, nv);
napi_del:
netif_napi_del(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
@@ -1934,6 +1909,7 @@ static void fbnic_free_qt_resources(struct fbnic_net *fbn,
if (xdp_rxq_info_is_reg(&qt->xdp_rxq)) {
xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
xdp_rxq_info_unreg(&qt->xdp_rxq);
+ fbnic_free_qt_page_pools(qt);
}
}
@@ -1971,12 +1947,15 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
- nv->napi.napi_id);
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
if (err)
return err;
- /* Register XDP memory model for completion queue */
+ err = xdp_rxq_info_reg(&qt->xdp_rxq, fbn->netdev, qt->sub0.q_idx,
+ nv->napi.napi_id);
+ if (err)
+ goto free_page_pools;
+
err = xdp_rxq_info_reg_mem_model(&qt->xdp_rxq, MEM_TYPE_PAGE_POOL,
qt->sub0.page_pool);
if (err)
@@ -2004,6 +1983,8 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
xdp_rxq_info_unreg_mem_model(&qt->xdp_rxq);
unreg_rxq:
xdp_rxq_info_unreg(&qt->xdp_rxq);
+free_page_pools:
+ fbnic_free_qt_page_pools(qt);
return err;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 04/14] eth: fbnic: use netmem_ref where applicable
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (2 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 03/14] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 05/14] eth: fbnic: request ops lock Jakub Kicinski
` (9 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Use netmem_ref instead of a struct page pointer in prep for
unreadable memory. fbnic has separate free buffer submission
queues for headers and for data. Refactor the helper which
returns the page pointer for a submission buffer to take the
high-level queue container, and create separate handlers
for the header and payload rings. This ties the "upcast" from
netmem to a system page to the use of sub0, which we know holds
system pages.
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 2 +-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 65 ++++++++++++--------
2 files changed, 40 insertions(+), 27 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index a935a1acfb3e..58ae7f9c8f54 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -100,7 +100,7 @@ struct fbnic_queue_stats {
#define FBNIC_PAGECNT_BIAS_MAX PAGE_SIZE
struct fbnic_rx_buf {
- struct page *page;
+ netmem_ref netmem;
long pagecnt_bias;
};
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 15ebbaa0bed2..8dbe83bc2be1 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -715,35 +715,47 @@ static void fbnic_clean_tsq(struct fbnic_napi_vector *nv,
}
static void fbnic_page_pool_init(struct fbnic_ring *ring, unsigned int idx,
- struct page *page)
+ netmem_ref netmem)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
- page_pool_fragment_page(page, FBNIC_PAGECNT_BIAS_MAX);
+ page_pool_fragment_netmem(netmem, FBNIC_PAGECNT_BIAS_MAX);
rx_buf->pagecnt_bias = FBNIC_PAGECNT_BIAS_MAX;
- rx_buf->page = page;
+ rx_buf->netmem = netmem;
}
-static struct page *fbnic_page_pool_get(struct fbnic_ring *ring,
- unsigned int idx)
+static struct page *
+fbnic_page_pool_get_head(struct fbnic_q_triad *qt, unsigned int idx)
{
- struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
+ struct fbnic_rx_buf *rx_buf = &qt->sub0.rx_buf[idx];
rx_buf->pagecnt_bias--;
- return rx_buf->page;
+ /* sub0 is always fed system pages, from the NAPI-level page_pool */
+ return netmem_to_page(rx_buf->netmem);
+}
+
+static netmem_ref
+fbnic_page_pool_get_data(struct fbnic_q_triad *qt, unsigned int idx)
+{
+ struct fbnic_rx_buf *rx_buf = &qt->sub1.rx_buf[idx];
+
+ rx_buf->pagecnt_bias--;
+
+ return rx_buf->netmem;
}
static void fbnic_page_pool_drain(struct fbnic_ring *ring, unsigned int idx,
int budget)
{
struct fbnic_rx_buf *rx_buf = &ring->rx_buf[idx];
- struct page *page = rx_buf->page;
+ netmem_ref netmem = rx_buf->netmem;
- if (!page_pool_unref_page(page, rx_buf->pagecnt_bias))
- page_pool_put_unrefed_page(ring->page_pool, page, -1, !!budget);
+ if (!page_pool_unref_netmem(netmem, rx_buf->pagecnt_bias))
+ page_pool_put_unrefed_netmem(ring->page_pool, netmem, -1,
+ !!budget);
- rx_buf->page = NULL;
+ rx_buf->netmem = 0;
}
static void fbnic_clean_twq(struct fbnic_napi_vector *nv, int napi_budget,
@@ -844,10 +856,10 @@ static void fbnic_clean_bdq(struct fbnic_ring *ring, unsigned int hw_head,
ring->head = head;
}
-static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, struct page *page)
+static void fbnic_bd_prep(struct fbnic_ring *bdq, u16 id, netmem_ref netmem)
{
__le64 *bdq_desc = &bdq->desc[id * FBNIC_BD_FRAG_COUNT];
- dma_addr_t dma = page_pool_get_dma_addr(page);
+ dma_addr_t dma = page_pool_get_dma_addr_netmem(netmem);
u64 bd, i = FBNIC_BD_FRAG_COUNT;
bd = (FBNIC_BD_PAGE_ADDR_MASK & dma) |
@@ -874,10 +886,10 @@ static void fbnic_fill_bdq(struct fbnic_ring *bdq)
return;
do {
- struct page *page;
+ netmem_ref netmem;
- page = page_pool_dev_alloc_pages(bdq->page_pool);
- if (!page) {
+ netmem = page_pool_dev_alloc_netmems(bdq->page_pool);
+ if (!netmem) {
u64_stats_update_begin(&bdq->stats.syncp);
bdq->stats.rx.alloc_failed++;
u64_stats_update_end(&bdq->stats.syncp);
@@ -885,8 +897,8 @@ static void fbnic_fill_bdq(struct fbnic_ring *bdq)
break;
}
- fbnic_page_pool_init(bdq, i, page);
- fbnic_bd_prep(bdq, i, page);
+ fbnic_page_pool_init(bdq, i, netmem);
+ fbnic_bd_prep(bdq, i, netmem);
i++;
i &= bdq->size_mask;
@@ -933,7 +945,7 @@ static void fbnic_pkt_prepare(struct fbnic_napi_vector *nv, u64 rcd,
{
unsigned int hdr_pg_idx = FIELD_GET(FBNIC_RCD_AL_BUFF_PAGE_MASK, rcd);
unsigned int hdr_pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
- struct page *page = fbnic_page_pool_get(&qt->sub0, hdr_pg_idx);
+ struct page *page = fbnic_page_pool_get_head(qt, hdr_pg_idx);
unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
unsigned int frame_sz, hdr_pg_start, hdr_pg_end, headroom;
unsigned char *hdr_start;
@@ -974,7 +986,7 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
unsigned int pg_idx = FIELD_GET(FBNIC_RCD_AL_BUFF_PAGE_MASK, rcd);
unsigned int pg_off = FIELD_GET(FBNIC_RCD_AL_BUFF_OFF_MASK, rcd);
unsigned int len = FIELD_GET(FBNIC_RCD_AL_BUFF_LEN_MASK, rcd);
- struct page *page = fbnic_page_pool_get(&qt->sub1, pg_idx);
+ netmem_ref netmem = fbnic_page_pool_get_data(qt, pg_idx);
unsigned int truesize;
bool added;
@@ -985,11 +997,11 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
FBNIC_BD_FRAG_SIZE;
/* Sync DMA buffer */
- dma_sync_single_range_for_cpu(nv->dev, page_pool_get_dma_addr(page),
+ dma_sync_single_range_for_cpu(nv->dev,
+ page_pool_get_dma_addr_netmem(netmem),
pg_off, truesize, DMA_BIDIRECTIONAL);
- added = xdp_buff_add_frag(&pkt->buff, page_to_netmem(page), pg_off, len,
- truesize);
+ added = xdp_buff_add_frag(&pkt->buff, netmem, pg_off, len, truesize);
if (unlikely(!added)) {
pkt->add_frag_failed = true;
netdev_err_once(nv->napi.dev,
@@ -1007,15 +1019,16 @@ static void fbnic_put_pkt_buff(struct fbnic_q_triad *qt,
if (xdp_buff_has_frags(&pkt->buff)) {
struct skb_shared_info *shinfo;
+ netmem_ref netmem;
int nr_frags;
shinfo = xdp_get_shared_info_from_buff(&pkt->buff);
nr_frags = shinfo->nr_frags;
while (nr_frags--) {
- page = skb_frag_page(&shinfo->frags[nr_frags]);
- page_pool_put_full_page(qt->sub1.page_pool, page,
- !!budget);
+ netmem = skb_frag_netmem(&shinfo->frags[nr_frags]);
+ page_pool_put_full_netmem(qt->sub1.page_pool, netmem,
+ !!budget);
}
}
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 05/14] eth: fbnic: request ops lock
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (3 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 04/14] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 06/14] eth: fbnic: split fbnic_disable() Jakub Kicinski
` (8 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
We'll add queue ops soon. Queue ops will opt the driver into
extra locking. Request this locking explicitly already, to make
future patches smaller and easier to review.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_netdev.c | 2 ++
drivers/net/ethernet/meta/fbnic/fbnic_pci.c | 9 ++++++++-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 15 ++++++++-------
3 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
index 0cf1ea927cc0..1d9d175e8f8c 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
@@ -710,6 +710,8 @@ struct net_device *fbnic_netdev_alloc(struct fbnic_dev *fbd)
fbnic_set_ethtool_ops(netdev);
+ netdev->request_ops_lock = true;
+
fbn = netdev_priv(netdev);
fbn->netdev = netdev;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
index b3c27c566f52..419a3335978f 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
@@ -208,8 +208,11 @@ static void fbnic_service_task(struct work_struct *work)
fbnic_bmc_rpc_check(fbd);
- if (netif_carrier_ok(fbd->netdev))
+ if (netif_carrier_ok(fbd->netdev)) {
+ netdev_lock(fbd->netdev);
fbnic_napi_depletion_check(fbd->netdev);
+ netdev_unlock(fbd->netdev);
+ }
if (netif_running(fbd->netdev))
schedule_delayed_work(&fbd->service_task, HZ);
@@ -393,12 +396,14 @@ static int fbnic_pm_suspend(struct device *dev)
goto null_uc_addr;
rtnl_lock();
+ netdev_lock(netdev);
netif_device_detach(netdev);
if (netif_running(netdev))
netdev->netdev_ops->ndo_stop(netdev);
+ netdev_unlock(netdev);
rtnl_unlock();
null_uc_addr:
@@ -464,6 +469,7 @@ static int __fbnic_pm_resume(struct device *dev)
fbnic_reset_queues(fbn, fbn->num_tx_queues, fbn->num_rx_queues);
rtnl_lock();
+ netdev_lock(netdev);
if (netif_running(netdev)) {
err = __fbnic_open(fbn);
@@ -471,6 +477,7 @@ static int __fbnic_pm_resume(struct device *dev)
goto err_free_mbx;
}
+ netdev_unlock(netdev);
rtnl_unlock();
return 0;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 8dbe83bc2be1..dc0735b20739 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1501,7 +1501,7 @@ static void fbnic_free_napi_vector(struct fbnic_net *fbn,
}
fbnic_napi_free_irq(fbd, nv);
- netif_napi_del(&nv->napi);
+ netif_napi_del_locked(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
}
@@ -1611,11 +1611,12 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
/* Tie napi to netdev */
fbn->napi[fbnic_napi_idx(nv)] = nv;
- netif_napi_add(fbn->netdev, &nv->napi, fbnic_poll);
+ netif_napi_add_locked(fbn->netdev, &nv->napi, fbnic_poll);
/* Record IRQ to NAPI struct */
- netif_napi_set_irq(&nv->napi,
- pci_irq_vector(to_pci_dev(fbd->dev), nv->v_idx));
+ netif_napi_set_irq_locked(&nv->napi,
+ pci_irq_vector(to_pci_dev(fbd->dev),
+ nv->v_idx));
/* Tie nv back to PCIe dev */
nv->dev = fbd->dev;
@@ -1704,7 +1705,7 @@ static int fbnic_alloc_napi_vector(struct fbnic_dev *fbd, struct fbnic_net *fbn,
return 0;
napi_del:
- netif_napi_del(&nv->napi);
+ netif_napi_del_locked(&nv->napi);
fbn->napi[fbnic_napi_idx(nv)] = NULL;
kfree(nv);
return err;
@@ -2173,7 +2174,7 @@ void fbnic_napi_disable(struct fbnic_net *fbn)
int i;
for (i = 0; i < fbn->num_napi; i++) {
- napi_disable(&fbn->napi[i]->napi);
+ napi_disable_locked(&fbn->napi[i]->napi);
fbnic_nv_irq_disable(fbn->napi[i]);
}
@@ -2621,7 +2622,7 @@ void fbnic_napi_enable(struct fbnic_net *fbn)
for (i = 0; i < fbn->num_napi; i++) {
struct fbnic_napi_vector *nv = fbn->napi[i];
- napi_enable(&nv->napi);
+ napi_enable_locked(&nv->napi);
fbnic_nv_irq_enable(nv);
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 06/14] eth: fbnic: split fbnic_disable()
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (4 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 05/14] eth: fbnic: request ops lock Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 07/14] eth: fbnic: split fbnic_flush() Jakub Kicinski
` (7 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Factor out handling a single nv from fbnic_disable() to make
it reusable for queue ops. Use a __ prefix for the factored-out
code. The real fbnic_nv_disable(), which will include
fbnic_wrfl(), will be added with the qops to avoid unused
function warnings.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 46 +++++++++++---------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index dc0735b20739..7d6bf35acfd4 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2180,31 +2180,35 @@ void fbnic_napi_disable(struct fbnic_net *fbn)
}
}
+static void __fbnic_nv_disable(struct fbnic_napi_vector *nv)
+{
+ int i, t;
+
+ /* Disable Tx queue triads */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_disable_twq0(&qt->sub0);
+ fbnic_disable_twq1(&qt->sub1);
+ fbnic_disable_tcq(&qt->cmpl);
+ }
+
+ /* Disable Rx queue triads */
+ for (i = 0; i < nv->rxt_count; i++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_disable_bdq(&qt->sub0, &qt->sub1);
+ fbnic_disable_rcq(&qt->cmpl);
+ }
+}
+
void fbnic_disable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
- int i, j, t;
+ int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
-
- /* Disable Tx queue triads */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_disable_twq0(&qt->sub0);
- fbnic_disable_twq1(&qt->sub1);
- fbnic_disable_tcq(&qt->cmpl);
- }
-
- /* Disable Rx queue triads */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_disable_bdq(&qt->sub0, &qt->sub1);
- fbnic_disable_rcq(&qt->cmpl);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ __fbnic_nv_disable(fbn->napi[i]);
fbnic_wrfl(fbd);
}
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 07/14] eth: fbnic: split fbnic_flush()
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (5 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 06/14] eth: fbnic: split fbnic_disable() Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 08/14] eth: fbnic: split fbnic_enable() Jakub Kicinski
` (6 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Factor out handling a single nv from fbnic_flush() to make
it reusable for queue ops.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 87 ++++++++++----------
1 file changed, 45 insertions(+), 42 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7d6bf35acfd4..8384e73b4492 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2297,52 +2297,55 @@ int fbnic_wait_all_queues_idle(struct fbnic_dev *fbd, bool may_fail)
return err;
}
+static void fbnic_nv_flush(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Flush any processed Tx Queue Triads and drop the rest */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+ struct netdev_queue *tx_queue;
+
+ /* Clean the work queues of unprocessed work */
+ fbnic_clean_twq0(nv, 0, &qt->sub0, true, qt->sub0.tail);
+ fbnic_clean_twq1(nv, false, &qt->sub1, true,
+ qt->sub1.tail);
+
+ /* Reset completion queue descriptor ring */
+ memset(qt->cmpl.desc, 0, qt->cmpl.size);
+
+ /* Nothing else to do if Tx queue is disabled */
+ if (qt->sub0.flags & FBNIC_RING_F_DISABLED)
+ continue;
+
+ /* Reset BQL associated with Tx queue */
+ tx_queue = netdev_get_tx_queue(nv->napi.dev,
+ qt->sub0.q_idx);
+ netdev_tx_reset_queue(tx_queue);
+ }
+
+ /* Flush any processed Rx Queue Triads and drop the rest */
+ for (j = 0; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ /* Clean the work queues of unprocessed work */
+ fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
+ fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
+
+ /* Reset completion queue descriptor ring */
+ memset(qt->cmpl.desc, 0, qt->cmpl.size);
+
+ fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
+ memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
+ }
+}
+
void fbnic_flush(struct fbnic_net *fbn)
{
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Flush any processed Tx Queue Triads and drop the rest */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
- struct netdev_queue *tx_queue;
-
- /* Clean the work queues of unprocessed work */
- fbnic_clean_twq0(nv, 0, &qt->sub0, true, qt->sub0.tail);
- fbnic_clean_twq1(nv, false, &qt->sub1, true,
- qt->sub1.tail);
-
- /* Reset completion queue descriptor ring */
- memset(qt->cmpl.desc, 0, qt->cmpl.size);
-
- /* Nothing else to do if Tx queue is disabled */
- if (qt->sub0.flags & FBNIC_RING_F_DISABLED)
- continue;
-
- /* Reset BQL associated with Tx queue */
- tx_queue = netdev_get_tx_queue(nv->napi.dev,
- qt->sub0.q_idx);
- netdev_tx_reset_queue(tx_queue);
- }
-
- /* Flush any processed Rx Queue Triads and drop the rest */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- /* Clean the work queues of unprocessed work */
- fbnic_clean_bdq(&qt->sub0, qt->sub0.tail, 0);
- fbnic_clean_bdq(&qt->sub1, qt->sub1.tail, 0);
-
- /* Reset completion queue descriptor ring */
- memset(qt->cmpl.desc, 0, qt->cmpl.size);
-
- fbnic_put_pkt_buff(qt, qt->cmpl.pkt, 0);
- memset(qt->cmpl.pkt, 0, sizeof(struct fbnic_pkt_buff));
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ fbnic_nv_flush(fbn->napi[i]);
}
void fbnic_fill(struct fbnic_net *fbn)
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 08/14] eth: fbnic: split fbnic_enable()
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (6 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 07/14] eth: fbnic: split fbnic_flush() Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 09/14] eth: fbnic: split fbnic_fill() Jakub Kicinski
` (5 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Factor out handling a single nv from fbnic_enable() to make
it reusable for queue ops. Use a __ prefix for the factored-out
code. The real fbnic_nv_enable(), which will include
fbnic_wrfl(), will be added with the qops to avoid unused
function warnings.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 47 +++++++++++---------
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 8384e73b4492..38dd1afb7005 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2584,33 +2584,36 @@ static void fbnic_enable_rcq(struct fbnic_napi_vector *nv,
fbnic_ring_wr32(rcq, FBNIC_QUEUE_RCQ_CTL, FBNIC_QUEUE_RCQ_CTL_ENABLE);
}
+static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Setup Tx Queue Triads */
+ for (t = 0; t < nv->txt_count; t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_enable_twq0(&qt->sub0);
+ fbnic_enable_twq1(&qt->sub1);
+ fbnic_enable_tcq(nv, &qt->cmpl);
+ }
+
+ /* Setup Rx Queue Triads */
+ for (j = 0; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ fbnic_enable_bdq(&qt->sub0, &qt->sub1);
+ fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
+ fbnic_enable_rcq(nv, &qt->cmpl);
+ }
+}
+
void fbnic_enable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Setup Tx Queue Triads */
- for (t = 0; t < nv->txt_count; t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_enable_twq0(&qt->sub0);
- fbnic_enable_twq1(&qt->sub1);
- fbnic_enable_tcq(nv, &qt->cmpl);
- }
-
- /* Setup Rx Queue Triads */
- for (j = 0; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- fbnic_enable_bdq(&qt->sub0, &qt->sub1);
- fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
- fbnic_enable_rcq(nv, &qt->cmpl);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ __fbnic_nv_enable(fbn->napi[i]);
fbnic_wrfl(fbd);
}
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 09/14] eth: fbnic: split fbnic_fill()
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (7 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 08/14] eth: fbnic: split fbnic_enable() Jakub Kicinski
@ 2025-08-29 1:22 ` Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable Jakub Kicinski
` (4 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:22 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Factor out handling a single nv from fbnic_fill() to make
it reusable for queue ops.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 33 +++++++++++---------
1 file changed, 18 insertions(+), 15 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 38dd1afb7005..7694b25ef77d 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2348,25 +2348,28 @@ void fbnic_flush(struct fbnic_net *fbn)
fbnic_nv_flush(fbn->napi[i]);
}
+static void fbnic_nv_fill(struct fbnic_napi_vector *nv)
+{
+ int j, t;
+
+ /* Configure NAPI mapping and populate pages
+ * in the BDQ rings to use for Rx
+ */
+ for (j = 0, t = nv->txt_count; j < nv->rxt_count; j++, t++) {
+ struct fbnic_q_triad *qt = &nv->qt[t];
+
+ /* Populate the header and payload BDQs */
+ fbnic_fill_bdq(&qt->sub0);
+ fbnic_fill_bdq(&qt->sub1);
+ }
+}
+
void fbnic_fill(struct fbnic_net *fbn)
{
int i;
- for (i = 0; i < fbn->num_napi; i++) {
- struct fbnic_napi_vector *nv = fbn->napi[i];
- int j, t;
-
- /* Configure NAPI mapping and populate pages
- * in the BDQ rings to use for Rx
- */
- for (j = 0, t = nv->txt_count; j < nv->rxt_count; j++, t++) {
- struct fbnic_q_triad *qt = &nv->qt[t];
-
- /* Populate the header and payload BDQs */
- fbnic_fill_bdq(&qt->sub0);
- fbnic_fill_bdq(&qt->sub1);
- }
- }
+ for (i = 0; i < fbn->num_napi; i++)
+ fbnic_nv_fill(fbn->napi[i]);
}
static void fbnic_enable_twq0(struct fbnic_ring *twq)
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (8 preceding siblings ...)
2025-08-29 1:22 ` [PATCH net-next v2 09/14] eth: fbnic: split fbnic_fill() Jakub Kicinski
@ 2025-08-29 1:23 ` Jakub Kicinski
2025-08-29 21:56 ` Mina Almasry
2025-08-29 1:23 ` [PATCH net-next v2 11/14] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
` (3 subsequent siblings)
13 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:23 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
mlx5 pokes into the rxq state to check if the queue has a memory
provider, and therefore whether it may produce unreadable mem.
Add a core rxq helper for doing this. fbnic will want
a similar thing (though for a slightly different reason).
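As a rough illustration of the intended usage on the driver side
(not part of this patch - fbnic does something very close in the
next patch; the function and pool sizing below are made up, only
the helper itself is real):

static struct page_pool *
example_create_rx_pool(struct net_device *netdev, unsigned int rxq_idx)
{
        struct page_pool_params pp_params = {
                .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
                .pool_size = 2048, /* made up */
                .nid       = NUMA_NO_NODE,
                .dev       = netdev->dev.parent,
                .dma_dir   = DMA_BIDIRECTIONAL,
                .max_len   = PAGE_SIZE,
                .netdev    = netdev,
                .queue_idx = rxq_idx,
        };

        /* If user space bound a memory provider (mp) to this queue,
         * the pool must be allowed to return unreadable netmem and
         * its buffers can only ever be written by the device.
         */
        if (netif_rxq_has_unreadable_mp(netdev, rxq_idx)) {
                pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
                pp_params.dma_dir = DMA_FROM_DEVICE;
        }

        return page_pool_create(&pp_params);
}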
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
v2:
- make the helper an rxq helper rather than PP helper
v1: https://lore.kernel.org/20250820025704.166248-12-kuba@kernel.org
---
include/net/netdev_queues.h | 2 ++
include/net/page_pool/helpers.h | 12 ++++++++++++
drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 9 +--------
net/core/netdev_rx_queue.c | 9 +++++++++
4 files changed, 24 insertions(+), 8 deletions(-)
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index b9d02bc65c97..cd00e0406cf4 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -151,6 +151,8 @@ struct netdev_queue_mgmt_ops {
int idx);
};
+bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx);
+
/**
* DOC: Lockless queue stopping / waking helpers.
*
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index aa3719f28216..3247026e096a 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -505,6 +505,18 @@ static inline void page_pool_nid_changed(struct page_pool *pool, int new_nid)
page_pool_update_nid(pool, new_nid);
}
+/**
+ * page_pool_is_unreadable() - will allocated buffers be unreadable for the CPU
+ * @pool: queried page pool
+ *
+ * Check if page pool will return buffers which are unreadable to the CPU /
+ * kernel. This will only be the case if user space bound a memory provider (mp)
+ * which returns unreadable memory to the queue served by the page pool.
+ * If %PP_FLAG_ALLOW_UNREADABLE_NETMEM was set but there is no mp bound
+ * this helper will return false. See also netif_rxq_has_unreadable_mp().
+ *
+ * Return: true if memory allocated by the page pool may be unreadable
+ */
static inline bool page_pool_is_unreadable(struct page_pool *pool)
{
return !!pool->mp_ops;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 0e48065a46eb..0633fe413e56 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -777,13 +777,6 @@ static void mlx5e_rq_shampo_hd_info_free(struct mlx5e_rq *rq)
bitmap_free(rq->mpwqe.shampo->bitmap);
}
-static bool mlx5_rq_needs_separate_hd_pool(struct mlx5e_rq *rq)
-{
- struct netdev_rx_queue *rxq = __netif_get_rx_queue(rq->netdev, rq->ix);
-
- return !!rxq->mp_params.mp_ops;
-}
-
static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
struct mlx5e_params *params,
struct mlx5e_rq_param *rqp,
@@ -822,7 +815,7 @@ static int mlx5_rq_shampo_alloc(struct mlx5_core_dev *mdev,
hd_pool_size = (rq->mpwqe.shampo->hd_per_wqe * wq_size) /
MLX5E_SHAMPO_WQ_HEADER_PER_PAGE;
- if (mlx5_rq_needs_separate_hd_pool(rq)) {
+ if (netif_rxq_has_unreadable_mp(rq->netdev, rq->ix)) {
/* Separate page pool for shampo headers */
struct page_pool_params pp_params = { };
diff --git a/net/core/netdev_rx_queue.c b/net/core/netdev_rx_queue.c
index 3bf1151d8061..c7d9341b7630 100644
--- a/net/core/netdev_rx_queue.c
+++ b/net/core/netdev_rx_queue.c
@@ -9,6 +9,15 @@
#include "page_pool_priv.h"
+/* See also page_pool_is_unreadable() */
+bool netif_rxq_has_unreadable_mp(struct net_device *dev, int idx)
+{
+ struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, idx);
+
+ return !!rxq->mp_params.mp_ops;
+}
+EXPORT_SYMBOL(netif_rxq_has_unreadable_mp);
+
int netdev_rx_queue_restart(struct net_device *dev, unsigned int rxq_idx)
{
struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 11/14] eth: fbnic: allocate unreadable page pool for the payloads
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (9 preceding siblings ...)
2025-08-29 1:23 ` [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable Jakub Kicinski
@ 2025-08-29 1:23 ` Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
` (2 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:23 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Allow allocating a page pool with unreadable memory for the payload
ring (sub1). We need to provide the queue ID so that the memory provider
can match the PP. Use the appropriate page pool DMA sync helper.
For unreadable mem the direction has to be FROM_DEVICE. The default
is BIDIR for XDP, but obviously unreadable mem is not compatible
with XDP in the first place, so that's fine. While at it remove
the define for page pool flags.
The rxq_idx is passed to fbnic_alloc_rx_qt_resources() explicitly
to make it easy to allocate page pools without NAPI (see the patch
after the next).
Reviewed-by: Mina Almasry <almasrymina@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
v2:
- update commit msg
v1: https://lore.kernel.org/20250820025704.166248-13-kuba@kernel.org
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 31 +++++++++++++-------
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 7694b25ef77d..2727cc037663 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -997,9 +997,8 @@ static void fbnic_add_rx_frag(struct fbnic_napi_vector *nv, u64 rcd,
FBNIC_BD_FRAG_SIZE;
/* Sync DMA buffer */
- dma_sync_single_range_for_cpu(nv->dev,
- page_pool_get_dma_addr_netmem(netmem),
- pg_off, truesize, DMA_BIDIRECTIONAL);
+ page_pool_dma_sync_netmem_for_cpu(qt->sub1.page_pool, netmem,
+ pg_off, truesize);
added = xdp_buff_add_frag(&pkt->buff, netmem, pg_off, len, truesize);
if (unlikely(!added)) {
@@ -1515,16 +1514,14 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
fbnic_free_napi_vector(fbn, fbn->napi[i]);
}
-#define FBNIC_PAGE_POOL_FLAGS \
- (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV)
-
static int
fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
- struct fbnic_q_triad *qt)
+ struct fbnic_q_triad *qt, unsigned int rxq_idx)
{
struct page_pool_params pp_params = {
.order = 0,
- .flags = FBNIC_PAGE_POOL_FLAGS,
+ .flags = PP_FLAG_DMA_MAP |
+ PP_FLAG_DMA_SYNC_DEV,
.pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
.dev = nv->dev,
@@ -1533,6 +1530,7 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
.max_len = PAGE_SIZE,
.napi = &nv->napi,
.netdev = fbn->netdev,
+ .queue_idx = rxq_idx,
};
struct page_pool *pp;
@@ -1553,10 +1551,23 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
return PTR_ERR(pp);
qt->sub0.page_pool = pp;
- page_pool_get(pp);
+ if (netif_rxq_has_unreadable_mp(fbn->netdev, rxq_idx)) {
+ pp_params.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+ pp_params.dma_dir = DMA_FROM_DEVICE;
+
+ pp = page_pool_create(&pp_params);
+ if (IS_ERR(pp))
+ goto err_destroy_sub0;
+ } else {
+ page_pool_get(pp);
+ }
qt->sub1.page_pool = pp;
return 0;
+
+err_destroy_sub0:
+ page_pool_destroy(pp);
+ return PTR_ERR(pp);
}
static void fbnic_ring_init(struct fbnic_ring *ring, u32 __iomem *doorbell,
@@ -1961,7 +1972,7 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt);
+ err = fbnic_alloc_qt_page_pools(fbn, nv, qt, qt->cmpl.q_idx);
if (err)
return err;
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (10 preceding siblings ...)
2025-08-29 1:23 ` [PATCH net-next v2 11/14] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
@ 2025-08-29 1:23 ` Jakub Kicinski
2025-08-29 21:57 ` Mina Almasry
2025-08-29 1:23 ` [PATCH net-next v2 13/14] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
13 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:23 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
In preparation for queue ops support we need to be more careful
about when direct page pool recycling is enabled. Don't set the
NAPI pointer when the page pool is created; instead, call
page_pool_enable_direct_recycling() from the function that
activates the queue (once the config can no longer fail).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 2727cc037663..f5b83b6e1cc3 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1528,7 +1528,6 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
.dma_dir = DMA_BIDIRECTIONAL,
.offset = 0,
.max_len = PAGE_SIZE,
- .napi = &nv->napi,
.netdev = fbn->netdev,
.queue_idx = rxq_idx,
};
@@ -2615,6 +2614,11 @@ static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
for (j = 0; j < nv->rxt_count; j++, t++) {
struct fbnic_q_triad *qt = &nv->qt[t];
+ page_pool_enable_direct_recycling(qt->sub0.page_pool,
+ &nv->napi);
+ page_pool_enable_direct_recycling(qt->sub1.page_pool,
+ &nv->napi);
+
fbnic_enable_bdq(&qt->sub0, &qt->sub1);
fbnic_config_drop_mode_rcq(nv, &qt->cmpl);
fbnic_enable_rcq(nv, &qt->cmpl);
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 13/14] eth: fbnic: don't pass NAPI into pp alloc
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (11 preceding siblings ...)
2025-08-29 1:23 ` [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
@ 2025-08-29 1:23 ` Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
13 siblings, 0 replies; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:23 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
The queue API may ask us to allocate page pools while the device
is down, to validate that we ingested a memory provider binding.
Drop the NAPI vector argument from fbnic_alloc_qt_page_pools() so
that it can be called when no NAPI vector exists.
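Concretely, this enables a call path of roughly the following shape
(a sketch assuming the driver's internal headers; the real version is
fbnic_queue_mem_alloc() added in patch 14):

static int queue_mem_alloc_sketch(struct net_device *dev, void *qmem,
                                  int idx)
{
        struct fbnic_net *fbn = netdev_priv(dev);

        /* Device down: no NAPI vector exists yet, so the page pool
         * params must be derived from the netdev alone.
         */
        if (!netif_running(dev))
                return fbnic_alloc_qt_page_pools(fbn, qmem, idx);

        /* Device up: rings and page pools are allocated against the
         * live NAPI vector (elided here).
         */
        return 0;
}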
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index f5b83b6e1cc3..2e8ea3e01eba 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -1515,8 +1515,8 @@ void fbnic_free_napi_vectors(struct fbnic_net *fbn)
}
static int
-fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
- struct fbnic_q_triad *qt, unsigned int rxq_idx)
+fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_q_triad *qt,
+ unsigned int rxq_idx)
{
struct page_pool_params pp_params = {
.order = 0,
@@ -1524,7 +1524,7 @@ fbnic_alloc_qt_page_pools(struct fbnic_net *fbn, struct fbnic_napi_vector *nv,
PP_FLAG_DMA_SYNC_DEV,
.pool_size = fbn->hpq_size + fbn->ppq_size,
.nid = NUMA_NO_NODE,
- .dev = nv->dev,
+ .dev = fbn->netdev->dev.parent,
.dma_dir = DMA_BIDIRECTIONAL,
.offset = 0,
.max_len = PAGE_SIZE,
@@ -1971,7 +1971,7 @@ static int fbnic_alloc_rx_qt_resources(struct fbnic_net *fbn,
struct device *dev = fbn->netdev->dev.parent;
int err;
- err = fbnic_alloc_qt_page_pools(fbn, nv, qt, qt->cmpl.q_idx);
+ err = fbnic_alloc_qt_page_pools(fbn, qt, qt->cmpl.q_idx);
if (err)
return err;
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
` (12 preceding siblings ...)
2025-08-29 1:23 ` [PATCH net-next v2 13/14] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
@ 2025-08-29 1:23 ` Jakub Kicinski
2025-08-29 22:09 ` Mina Almasry
13 siblings, 1 reply; 18+ messages in thread
From: Jakub Kicinski @ 2025-08-29 1:23 UTC (permalink / raw)
To: davem
Cc: netdev, edumazet, pabeni, andrew+netdev, horms, almasrymina,
tariqt, dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf,
Jakub Kicinski
Add support for queue ops. With them, fbnic doesn't have to shut
down the entire device just to restart a single queue.
./tools/testing/selftests/drivers/net/hw/iou-zcrx.py
TAP version 13
1..3
ok 1 iou-zcrx.test_zcrx
ok 2 iou-zcrx.test_zcrx_oneshot
ok 3 iou-zcrx.test_zcrx_rss
# Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
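For reference, the core drives these callbacks roughly as follows
when restarting a single Rx queue (a simplified sketch of the flow in
netdev_rx_queue_restart(); error unwinding and the device-down path
are elided, so this is not the verbatim core code):

#include <linux/netdevice.h>
#include <linux/slab.h>
#include <net/netdev_queues.h>

static int rx_queue_restart_sketch(struct net_device *dev, int idx)
{
        const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
        void *new_mem, *old_mem;
        int err = -ENOMEM;

        new_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
        old_mem = kvzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
        if (!new_mem || !old_mem)
                goto out;

        /* Happy path only; the real core unwinds each step and
         * restarts the old queue if starting the new one fails.
         */
        err = qops->ndo_queue_mem_alloc(dev, new_mem, idx);
        if (err)
                goto out;
        qops->ndo_queue_stop(dev, old_mem, idx);
        qops->ndo_queue_start(dev, new_mem, idx);
        qops->ndo_queue_mem_free(dev, old_mem);
        err = 0;
out:
        kvfree(new_mem);
        kvfree(old_mem);
        return err;
}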
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/meta/fbnic/fbnic_txrx.h | 2 +
.../net/ethernet/meta/fbnic/fbnic_netdev.c | 3 +-
drivers/net/ethernet/meta/fbnic/fbnic_txrx.c | 171 ++++++++++++++++++
3 files changed, 174 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
index 58ae7f9c8f54..31fac0ba0902 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.h
@@ -156,6 +156,8 @@ struct fbnic_napi_vector {
struct fbnic_q_triad qt[];
};
+extern const struct netdev_queue_mgmt_ops fbnic_queue_mgmt_ops;
+
netdev_tx_t fbnic_xmit_frame(struct sk_buff *skb, struct net_device *dev);
netdev_features_t
fbnic_features_check(struct sk_buff *skb, struct net_device *dev,
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
index 1d9d175e8f8c..024007830635 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
@@ -707,11 +707,10 @@ struct net_device *fbnic_netdev_alloc(struct fbnic_dev *fbd)
netdev->netdev_ops = &fbnic_netdev_ops;
netdev->stat_ops = &fbnic_stat_ops;
+ netdev->queue_mgmt_ops = &fbnic_queue_mgmt_ops;
fbnic_set_ethtool_ops(netdev);
- netdev->request_ops_lock = true;
-
fbn = netdev_priv(netdev);
fbn->netdev = netdev;
diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
index 2e8ea3e01eba..493f7f4df013 100644
--- a/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
+++ b/drivers/net/ethernet/meta/fbnic/fbnic_txrx.c
@@ -2212,6 +2212,13 @@ static void __fbnic_nv_disable(struct fbnic_napi_vector *nv)
}
}
+static void
+fbnic_nv_disable(struct fbnic_net *fbn, struct fbnic_napi_vector *nv)
+{
+ __fbnic_nv_disable(nv);
+ fbnic_wrfl(fbn->fbd);
+}
+
void fbnic_disable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
@@ -2307,6 +2314,44 @@ int fbnic_wait_all_queues_idle(struct fbnic_dev *fbd, bool may_fail)
return err;
}
+static int
+fbnic_wait_queue_idle(struct fbnic_net *fbn, bool rx, unsigned int idx)
+{
+ static const unsigned int tx_regs[] = {
+ FBNIC_QM_TWQ_IDLE(0), FBNIC_QM_TQS_IDLE(0),
+ FBNIC_QM_TDE_IDLE(0), FBNIC_QM_TCQ_IDLE(0),
+ }, rx_regs[] = {
+ FBNIC_QM_HPQ_IDLE(0), FBNIC_QM_PPQ_IDLE(0),
+ FBNIC_QM_RCQ_IDLE(0),
+ };
+ struct fbnic_dev *fbd = fbn->fbd;
+ unsigned int val, mask, off;
+ const unsigned int *regs;
+ unsigned int reg_cnt;
+ int i, err;
+
+ regs = rx ? rx_regs : tx_regs;
+ reg_cnt = rx ? ARRAY_SIZE(rx_regs) : ARRAY_SIZE(tx_regs);
+
+ off = idx / 32;
+ mask = BIT(idx % 32);
+
+ for (i = 0; i < reg_cnt; i++) {
+ err = read_poll_timeout_atomic(fbnic_rd32, val, val & mask,
+ 2, 500000, false,
+ fbd, regs[i] + off);
+ if (err) {
+ netdev_err(fbd->netdev,
+ "wait for queue %s%d idle failed 0x%04x(%d): %08x (mask: %08x)\n",
+ rx ? "Rx" : "Tx", idx, regs[i] + off, i,
+ val, mask);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
static void fbnic_nv_flush(struct fbnic_napi_vector *nv)
{
int j, t;
@@ -2625,6 +2670,12 @@ static void __fbnic_nv_enable(struct fbnic_napi_vector *nv)
}
}
+static void fbnic_nv_enable(struct fbnic_net *fbn, struct fbnic_napi_vector *nv)
+{
+ __fbnic_nv_enable(nv);
+ fbnic_wrfl(fbn->fbd);
+}
+
void fbnic_enable(struct fbnic_net *fbn)
{
struct fbnic_dev *fbd = fbn->fbd;
@@ -2703,3 +2754,123 @@ void fbnic_napi_depletion_check(struct net_device *netdev)
fbnic_wrfl(fbd);
}
+
+static int fbnic_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ const struct fbnic_q_triad *real;
+ struct fbnic_q_triad *qt = qmem;
+ struct fbnic_napi_vector *nv;
+
+ if (!netif_running(dev))
+ return fbnic_alloc_qt_page_pools(fbn, qt, idx);
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ fbnic_ring_init(&qt->sub0, real->sub0.doorbell, real->sub0.q_idx,
+ real->sub0.flags);
+ fbnic_ring_init(&qt->sub1, real->sub1.doorbell, real->sub1.q_idx,
+ real->sub1.flags);
+ fbnic_ring_init(&qt->cmpl, real->cmpl.doorbell, real->cmpl.q_idx,
+ real->cmpl.flags);
+
+ return fbnic_alloc_rx_qt_resources(fbn, nv, qt);
+}
+
+static void fbnic_queue_mem_free(struct net_device *dev, void *qmem)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ struct fbnic_q_triad *qt = qmem;
+
+ if (!netif_running(dev))
+ fbnic_free_qt_page_pools(qt);
+ else
+ fbnic_free_qt_resources(fbn, qt);
+}
+
+static void __fbnic_nv_restart(struct fbnic_net *fbn,
+ struct fbnic_napi_vector *nv)
+{
+ struct fbnic_dev *fbd = fbn->fbd;
+ int i;
+
+ fbnic_nv_enable(fbn, nv);
+ fbnic_nv_fill(nv);
+
+ napi_enable_locked(&nv->napi);
+ fbnic_nv_irq_enable(nv);
+ fbnic_wr32(fbd, FBNIC_INTR_SET(nv->v_idx / 32), BIT(nv->v_idx % 32));
+ fbnic_wrfl(fbd);
+
+ for (i = 0; i < nv->txt_count; i++)
+ netif_wake_subqueue(fbn->netdev, nv->qt[i].sub0.q_idx);
+}
+
+static int fbnic_queue_start(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ struct fbnic_napi_vector *nv;
+ struct fbnic_q_triad *real;
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ fbnic_aggregate_ring_rx_counters(fbn, &real->sub0);
+ fbnic_aggregate_ring_rx_counters(fbn, &real->sub1);
+ fbnic_aggregate_ring_rx_counters(fbn, &real->cmpl);
+
+ memcpy(real, qmem, sizeof(*real));
+
+ __fbnic_nv_restart(fbn, nv);
+
+ return 0;
+}
+
+static int fbnic_queue_stop(struct net_device *dev, void *qmem, int idx)
+{
+ struct fbnic_net *fbn = netdev_priv(dev);
+ const struct fbnic_q_triad *real;
+ struct fbnic_napi_vector *nv;
+ int i, t;
+ int err;
+
+ real = container_of(fbn->rx[idx], struct fbnic_q_triad, cmpl);
+ nv = fbn->napi[idx % fbn->num_napi];
+
+ napi_disable_locked(&nv->napi);
+ fbnic_nv_irq_disable(nv);
+
+ for (i = 0; i < nv->txt_count; i++)
+ netif_stop_subqueue(dev, nv->qt[i].sub0.q_idx);
+ fbnic_nv_disable(fbn, nv);
+
+ for (t = 0; t < nv->txt_count + nv->rxt_count; t++) {
+ err = fbnic_wait_queue_idle(fbn, t >= nv->txt_count,
+ nv->qt[t].sub0.q_idx);
+ if (err)
+ goto err_restart;
+ }
+
+ fbnic_synchronize_irq(fbn->fbd, nv->v_idx);
+ fbnic_nv_flush(nv);
+
+ page_pool_disable_direct_recycling(real->sub0.page_pool);
+ page_pool_disable_direct_recycling(real->sub1.page_pool);
+
+ memcpy(qmem, real, sizeof(*real));
+
+ return 0;
+
+err_restart:
+ __fbnic_nv_restart(fbn, nv);
+ return err;
+}
+
+const struct netdev_queue_mgmt_ops fbnic_queue_mgmt_ops = {
+ .ndo_queue_mem_size = sizeof(struct fbnic_q_triad),
+ .ndo_queue_mem_alloc = fbnic_queue_mem_alloc,
+ .ndo_queue_mem_free = fbnic_queue_mem_free,
+ .ndo_queue_start = fbnic_queue_start,
+ .ndo_queue_stop = fbnic_queue_stop,
+};
--
2.51.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable
2025-08-29 1:23 ` [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable Jakub Kicinski
@ 2025-08-29 21:56 ` Mina Almasry
0 siblings, 0 replies; 18+ messages in thread
From: Mina Almasry @ 2025-08-29 21:56 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms, tariqt,
dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf
On Thu, Aug 28, 2025 at 6:23 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> mlx5 pokes into the rxq state to check if the queue has a memory
> provider, and therefore whether it may produce unreadable mem.
> Add a helper for doing this in the page pool API. fbnic will want
> a similar thing (tho, for a slightly different reason).
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
The only thing is I wonder whether we need ASSERT_RTNL() (or the
netdev instance lock equivalent) in the helper to catch data races
around setting/querying mp_ops in future code, but we already don't
do that in existing code and it's not worth the hassle.
--
Thanks,
Mina
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start
2025-08-29 1:23 ` [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
@ 2025-08-29 21:57 ` Mina Almasry
0 siblings, 0 replies; 18+ messages in thread
From: Mina Almasry @ 2025-08-29 21:57 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms, tariqt,
dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf
On Thu, Aug 28, 2025 at 6:23 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> We need to be more careful about when direct page pool recycling
> is enabled in preparation for queue ops support. Don't set the
> NAPI pointer, call page_pool_enable_direct_recycling() from
> the function that activates the queue (once the config can
> no longer fail).
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Mina Almasry <almasrymina@google.com>
--
Thanks,
Mina
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx
2025-08-29 1:23 ` [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
@ 2025-08-29 22:09 ` Mina Almasry
0 siblings, 0 replies; 18+ messages in thread
From: Mina Almasry @ 2025-08-29 22:09 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem, netdev, edumazet, pabeni, andrew+netdev, horms, tariqt,
dtatulea, hawk, ilias.apalodimas, alexanderduyck, sdf
On Thu, Aug 28, 2025 at 6:23 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> Support queue ops. fbnic doesn't shut down the entire device
> just to restart a single queue.
>
> ./tools/testing/selftests/drivers/net/hw/iou-zcrx.py
> TAP version 13
> 1..3
> ok 1 iou-zcrx.test_zcrx
> ok 2 iou-zcrx.test_zcrx_oneshot
> ok 3 iou-zcrx.test_zcrx_rss
> # Totals: pass:3 fail:0 xfail:0 xpass:0 skip:0 error:0
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
I don't have a deep understanding of the driver internals, but from
the queue API interface POV I don't see any issues. FWIW:
Acked-by: Mina Almasry <almasrymina@google.com>
--
Thanks,
Mina
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2025-08-29 22:09 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-08-29 1:22 [PATCH net-next v2 00/14] eth: fbnic: support queue API and zero-copy Rx Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 01/14] eth: fbnic: move page pool pointer from NAPI to the ring struct Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 02/14] eth: fbnic: move xdp_rxq_info_reg() to resource alloc Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 03/14] eth: fbnic: move page pool alloc to fbnic_alloc_rx_qt_resources() Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 04/14] eth: fbnic: use netmem_ref where applicable Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 05/14] eth: fbnic: request ops lock Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 06/14] eth: fbnic: split fbnic_disable() Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 07/14] eth: fbnic: split fbnic_flush() Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 08/14] eth: fbnic: split fbnic_enable() Jakub Kicinski
2025-08-29 1:22 ` [PATCH net-next v2 09/14] eth: fbnic: split fbnic_fill() Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 10/14] net: add helper to pre-check if PP for an Rx queue will be unreadable Jakub Kicinski
2025-08-29 21:56 ` Mina Almasry
2025-08-29 1:23 ` [PATCH net-next v2 11/14] eth: fbnic: allocate unreadable page pool for the payloads Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 12/14] eth: fbnic: defer page pool recycling activation to queue start Jakub Kicinski
2025-08-29 21:57 ` Mina Almasry
2025-08-29 1:23 ` [PATCH net-next v2 13/14] eth: fbnic: don't pass NAPI into pp alloc Jakub Kicinski
2025-08-29 1:23 ` [PATCH net-next v2 14/14] eth: fbnic: support queue ops / zero-copy Rx Jakub Kicinski
2025-08-29 22:09 ` Mina Almasry