* [PATCH net-next v3 0/2] bnxt_en: implement netdev_queue_mgmt_ops
@ 2024-06-19 6:29 David Wei
2024-06-19 6:29 ` [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers David Wei
` (2 more replies)
From: David Wei @ 2024-06-19 6:29 UTC (permalink / raw)
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
Implement the netdev_queue_mgmt_ops API, added in [1], for bnxt. This will be
used in the io_uring ZC Rx patchset to configure queues with a custom page
pool with a special memory provider for zero copy support.
The first two patches prep the driver, while the final patch adds the
implementation.
Any Rx queue can be reset without affecting other queues. V2 and earlier
versions of this patchset were thought to only support resetting queues that
are not in the main RSS context. Upon further testing I realised that moving
queues out and calling bnxt_hwrm_vnic_update() isn't necessary.
I didn't include the netdev core API that uses netdev_queue_mgmt_ops because
Mina is adding it in his devmem TCP series [2]. But I'm happy to include it if
folks want a user included with this series.
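For context, here is a rough sketch of how a core helper such as the
netdev_rx_queue_restart() being added in [2] is expected to drive these ops
during a per-queue restart. The helper name, allocation scheme and error
handling are assumptions based on the in-flight core series, not code in this
patchset; recovery paths (e.g. restarting the old queue on failure) are
omitted for brevity:

#include <linux/netdevice.h>
#include <linux/slab.h>

static int example_rx_queue_restart(struct net_device *dev, int idx)
{
	const struct netdev_queue_mgmt_ops *qops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;
	int err = -ENOMEM;

	new_mem = kzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
	old_mem = kzalloc(qops->ndo_queue_mem_size, GFP_KERNEL);
	if (!new_mem || !old_mem)
		goto out;

	/* Pre-allocate the replacement queue memory (the "clone"). */
	err = qops->ndo_queue_mem_alloc(dev, new_mem, idx);
	if (err)
		goto out;

	/* Stop the live queue, saving its state into old_mem ... */
	err = qops->ndo_queue_stop(dev, old_mem, idx);
	if (err)
		goto out_free_new;

	/* ... and restart it from the pre-allocated clone. */
	err = qops->ndo_queue_start(dev, new_mem, idx);
	if (err)
		goto out_free_new;

	/* The old queue memory is no longer referenced by the device. */
	qops->ndo_queue_mem_free(dev, old_mem);
	goto out;

out_free_new:
	qops->ndo_queue_mem_free(dev, new_mem);
out:
	kfree(new_mem);
	kfree(old_mem);
	return err;
}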
I tested this series on BCM957504-N1100FY4 with FW 229.1.123.0. I
manually injected failures at all the places that can return an errno
and confirmed that the device/queue is never left in a broken state.
[1]: https://lore.kernel.org/netdev/20240501232549.1327174-2-shailend@google.com/
[2]: https://lore.kernel.org/netdev/20240607005127.3078656-2-almasrymina@google.com/
---
v3:
- tested w/o bnxt_hwrm_vnic_update() and it works on any queue
- removed unneeded code
v2:
- fix broken build
- remove unused var in bnxt_init_one_rx_ring()
David Wei (2):
bnxt_en: split rx ring helpers out from ring helpers
bnxt_en: implement netdev_queue_mgmt_ops
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 575 ++++++++++++++++++----
1 file changed, 468 insertions(+), 107 deletions(-)
--
2.43.0
* [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers
2024-06-19 6:29 [PATCH net-next v3 0/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
@ 2024-06-19 6:29 ` David Wei
2024-06-20 16:54 ` Simon Horman
2024-06-19 6:29 ` [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
2024-06-21 9:20 ` [PATCH net-next v3 0/2] " patchwork-bot+netdevbpf
From: David Wei @ 2024-06-19 6:29 UTC (permalink / raw)
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
To prepare for the queue API implementation, split the rx ring functions out
from the ring helpers. These new helpers will be called from the queue API
implementation.
Signed-off-by: David Wei <dw@davidwei.uk>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 300 ++++++++++++++--------
1 file changed, 193 insertions(+), 107 deletions(-)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 7dc00c0d8992..9e8d5cc32f16 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3317,37 +3317,12 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
}
}
-static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
{
- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
struct pci_dev *pdev = bp->pdev;
- struct bnxt_tpa_idx_map *map;
- int i, max_idx, max_agg_idx;
+ int i, max_idx;
max_idx = bp->rx_nr_pages * RX_DESC_CNT;
- max_agg_idx = bp->rx_agg_nr_pages * RX_DESC_CNT;
- if (!rxr->rx_tpa)
- goto skip_rx_tpa_free;
-
- for (i = 0; i < bp->max_tpa; i++) {
- struct bnxt_tpa_info *tpa_info = &rxr->rx_tpa[i];
- u8 *data = tpa_info->data;
-
- if (!data)
- continue;
-
- dma_unmap_single_attrs(&pdev->dev, tpa_info->mapping,
- bp->rx_buf_use_size, bp->rx_dir,
- DMA_ATTR_WEAK_ORDERING);
-
- tpa_info->data = NULL;
-
- skb_free_frag(data);
- }
-
-skip_rx_tpa_free:
- if (!rxr->rx_buf_ring)
- goto skip_rx_buf_free;
for (i = 0; i < max_idx; i++) {
struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
@@ -3367,12 +3342,15 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
skb_free_frag(data);
}
}
+}
-skip_rx_buf_free:
- if (!rxr->rx_agg_ring)
- goto skip_rx_agg_free;
+static void bnxt_free_one_rx_agg_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+{
+ int i, max_idx;
- for (i = 0; i < max_agg_idx; i++) {
+ max_idx = bp->rx_agg_nr_pages * RX_DESC_CNT;
+
+ for (i = 0; i < max_idx; i++) {
struct bnxt_sw_rx_agg_bd *rx_agg_buf = &rxr->rx_agg_ring[i];
struct page *page = rx_agg_buf->page;
@@ -3384,6 +3362,45 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
page_pool_recycle_direct(rxr->page_pool, page);
}
+}
+
+static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+{
+ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+ struct pci_dev *pdev = bp->pdev;
+ struct bnxt_tpa_idx_map *map;
+ int i;
+
+ if (!rxr->rx_tpa)
+ goto skip_rx_tpa_free;
+
+ for (i = 0; i < bp->max_tpa; i++) {
+ struct bnxt_tpa_info *tpa_info = &rxr->rx_tpa[i];
+ u8 *data = tpa_info->data;
+
+ if (!data)
+ continue;
+
+ dma_unmap_single_attrs(&pdev->dev, tpa_info->mapping,
+ bp->rx_buf_use_size, bp->rx_dir,
+ DMA_ATTR_WEAK_ORDERING);
+
+ tpa_info->data = NULL;
+
+ skb_free_frag(data);
+ }
+
+skip_rx_tpa_free:
+ if (!rxr->rx_buf_ring)
+ goto skip_rx_buf_free;
+
+ bnxt_free_one_rx_ring(bp, rxr);
+
+skip_rx_buf_free:
+ if (!rxr->rx_agg_ring)
+ goto skip_rx_agg_free;
+
+ bnxt_free_one_rx_agg_ring(bp, rxr);
skip_rx_agg_free:
map = rxr->rx_tpa_idx_map;
@@ -4062,37 +4079,55 @@ static void bnxt_init_rxbd_pages(struct bnxt_ring_struct *ring, u32 type)
}
}
-static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+static void bnxt_alloc_one_rx_ring_skb(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr,
+ int ring_nr)
{
- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
- struct net_device *dev = bp->dev;
u32 prod;
int i;
prod = rxr->rx_prod;
for (i = 0; i < bp->rx_ring_size; i++) {
if (bnxt_alloc_rx_data(bp, rxr, prod, GFP_KERNEL)) {
- netdev_warn(dev, "init'ed rx ring %d with %d/%d skbs only\n",
+ netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d skbs only\n",
ring_nr, i, bp->rx_ring_size);
break;
}
prod = NEXT_RX(prod);
}
rxr->rx_prod = prod;
+}
- if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
- return 0;
+static void bnxt_alloc_one_rx_ring_page(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr,
+ int ring_nr)
+{
+ u32 prod;
+ int i;
prod = rxr->rx_agg_prod;
for (i = 0; i < bp->rx_agg_ring_size; i++) {
if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_KERNEL)) {
- netdev_warn(dev, "init'ed rx ring %d with %d/%d pages only\n",
+ netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
ring_nr, i, bp->rx_ring_size);
break;
}
prod = NEXT_RX_AGG(prod);
}
rxr->rx_agg_prod = prod;
+}
+
+static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+{
+ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+ int i;
+
+ bnxt_alloc_one_rx_ring_skb(bp, rxr, ring_nr);
+
+ if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
+ return 0;
+
+ bnxt_alloc_one_rx_ring_page(bp, rxr, ring_nr);
if (rxr->rx_tpa) {
dma_addr_t mapping;
@@ -4111,9 +4146,9 @@ static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
return 0;
}
-static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
+static void bnxt_init_one_rx_ring_rxbd(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
{
- struct bnxt_rx_ring_info *rxr;
struct bnxt_ring_struct *ring;
u32 type;
@@ -4123,28 +4158,43 @@ static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
if (NET_IP_ALIGN == 2)
type |= RX_BD_FLAGS_SOP;
- rxr = &bp->rx_ring[ring_nr];
ring = &rxr->rx_ring_struct;
bnxt_init_rxbd_pages(ring, type);
-
- netif_queue_set_napi(bp->dev, ring_nr, NETDEV_QUEUE_TYPE_RX,
- &rxr->bnapi->napi);
-
- if (BNXT_RX_PAGE_MODE(bp) && bp->xdp_prog) {
- bpf_prog_add(bp->xdp_prog, 1);
- rxr->xdp_prog = bp->xdp_prog;
- }
ring->fw_ring_id = INVALID_HW_RING_ID;
+}
+
+static void bnxt_init_one_rx_agg_ring_rxbd(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct bnxt_ring_struct *ring;
+ u32 type;
ring = &rxr->rx_agg_ring_struct;
ring->fw_ring_id = INVALID_HW_RING_ID;
-
if ((bp->flags & BNXT_FLAG_AGG_RINGS)) {
type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
RX_BD_TYPE_RX_AGG_BD | RX_BD_FLAGS_SOP;
bnxt_init_rxbd_pages(ring, type);
}
+}
+
+static int bnxt_init_one_rx_ring(struct bnxt *bp, int ring_nr)
+{
+ struct bnxt_rx_ring_info *rxr;
+
+ rxr = &bp->rx_ring[ring_nr];
+ bnxt_init_one_rx_ring_rxbd(bp, rxr);
+
+ netif_queue_set_napi(bp->dev, ring_nr, NETDEV_QUEUE_TYPE_RX,
+ &rxr->bnapi->napi);
+
+ if (BNXT_RX_PAGE_MODE(bp) && bp->xdp_prog) {
+ bpf_prog_add(bp->xdp_prog, 1);
+ rxr->xdp_prog = bp->xdp_prog;
+ }
+
+ bnxt_init_one_rx_agg_ring_rxbd(bp, rxr);
return bnxt_alloc_one_rx_ring(bp, ring_nr);
}
@@ -6869,6 +6919,48 @@ static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type,
bnxt_set_db_mask(bp, db, ring_type);
}
+static int bnxt_hwrm_rx_ring_alloc(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+ struct bnxt_napi *bnapi = rxr->bnapi;
+ u32 type = HWRM_RING_ALLOC_RX;
+ u32 map_idx = bnapi->index;
+ int rc;
+
+ rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+ if (rc)
+ return rc;
+
+ bnxt_set_db(bp, &rxr->rx_db, type, map_idx, ring->fw_ring_id);
+ bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
+
+ return 0;
+}
+
+static int bnxt_hwrm_rx_agg_ring_alloc(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
+ u32 type = HWRM_RING_ALLOC_AGG;
+ u32 grp_idx = ring->grp_idx;
+ u32 map_idx;
+ int rc;
+
+ map_idx = grp_idx + bp->rx_nr_rings;
+ rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+ if (rc)
+ return rc;
+
+ bnxt_set_db(bp, &rxr->rx_agg_db, type, map_idx,
+ ring->fw_ring_id);
+ bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
+ bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
+ bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
+
+ return 0;
+}
+
static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
{
bool agg_rings = !!(bp->flags & BNXT_FLAG_AGG_RINGS);
@@ -6934,24 +7026,21 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
bnxt_set_db(bp, &txr->tx_db, type, map_idx, ring->fw_ring_id);
}
- type = HWRM_RING_ALLOC_RX;
for (i = 0; i < bp->rx_nr_rings; i++) {
struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
- struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
- struct bnxt_napi *bnapi = rxr->bnapi;
- u32 map_idx = bnapi->index;
- rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+ rc = bnxt_hwrm_rx_ring_alloc(bp, rxr);
if (rc)
goto err_out;
- bnxt_set_db(bp, &rxr->rx_db, type, map_idx, ring->fw_ring_id);
/* If we have agg rings, post agg buffers first. */
if (!agg_rings)
bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
- bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id;
if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) {
struct bnxt_cp_ring_info *cpr2 = rxr->rx_cpr;
+ struct bnxt_napi *bnapi = rxr->bnapi;
u32 type2 = HWRM_RING_ALLOC_CMPL;
+ struct bnxt_ring_struct *ring;
+ u32 map_idx = bnapi->index;
ring = &cpr2->cp_ring_struct;
ring->handle = BNXT_SET_NQ_HDL(cpr2);
@@ -6965,23 +7054,10 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
}
if (agg_rings) {
- type = HWRM_RING_ALLOC_AGG;
for (i = 0; i < bp->rx_nr_rings; i++) {
- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
- struct bnxt_ring_struct *ring =
- &rxr->rx_agg_ring_struct;
- u32 grp_idx = ring->grp_idx;
- u32 map_idx = grp_idx + bp->rx_nr_rings;
-
- rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+ rc = bnxt_hwrm_rx_agg_ring_alloc(bp, &bp->rx_ring[i]);
if (rc)
goto err_out;
-
- bnxt_set_db(bp, &rxr->rx_agg_db, type, map_idx,
- ring->fw_ring_id);
- bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
- bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
- bp->grp_info[grp_idx].agg_fw_ring_id = ring->fw_ring_id;
}
}
err_out:
@@ -7021,6 +7097,50 @@ static int hwrm_ring_free_send_msg(struct bnxt *bp,
return 0;
}
+static void bnxt_hwrm_rx_ring_free(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr,
+ bool close_path)
+{
+ struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+ u32 grp_idx = rxr->bnapi->index;
+ u32 cmpl_ring_id;
+
+ if (ring->fw_ring_id == INVALID_HW_RING_ID)
+ return;
+
+ cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+ hwrm_ring_free_send_msg(bp, ring,
+ RING_FREE_REQ_RING_TYPE_RX,
+ close_path ? cmpl_ring_id :
+ INVALID_HW_RING_ID);
+ ring->fw_ring_id = INVALID_HW_RING_ID;
+ bp->grp_info[grp_idx].rx_fw_ring_id = INVALID_HW_RING_ID;
+}
+
+static void bnxt_hwrm_rx_agg_ring_free(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr,
+ bool close_path)
+{
+ struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
+ u32 grp_idx = rxr->bnapi->index;
+ u32 type, cmpl_ring_id;
+
+ if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
+ type = RING_FREE_REQ_RING_TYPE_RX_AGG;
+ else
+ type = RING_FREE_REQ_RING_TYPE_RX;
+
+ if (ring->fw_ring_id == INVALID_HW_RING_ID)
+ return;
+
+ cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+ hwrm_ring_free_send_msg(bp, ring, type,
+ close_path ? cmpl_ring_id :
+ INVALID_HW_RING_ID);
+ ring->fw_ring_id = INVALID_HW_RING_ID;
+ bp->grp_info[grp_idx].agg_fw_ring_id = INVALID_HW_RING_ID;
+}
+
static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
{
u32 type;
@@ -7045,42 +7165,8 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
}
for (i = 0; i < bp->rx_nr_rings; i++) {
- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
- struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
- u32 grp_idx = rxr->bnapi->index;
-
- if (ring->fw_ring_id != INVALID_HW_RING_ID) {
- u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
-
- hwrm_ring_free_send_msg(bp, ring,
- RING_FREE_REQ_RING_TYPE_RX,
- close_path ? cmpl_ring_id :
- INVALID_HW_RING_ID);
- ring->fw_ring_id = INVALID_HW_RING_ID;
- bp->grp_info[grp_idx].rx_fw_ring_id =
- INVALID_HW_RING_ID;
- }
- }
-
- if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
- type = RING_FREE_REQ_RING_TYPE_RX_AGG;
- else
- type = RING_FREE_REQ_RING_TYPE_RX;
- for (i = 0; i < bp->rx_nr_rings; i++) {
- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
- struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
- u32 grp_idx = rxr->bnapi->index;
-
- if (ring->fw_ring_id != INVALID_HW_RING_ID) {
- u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
-
- hwrm_ring_free_send_msg(bp, ring, type,
- close_path ? cmpl_ring_id :
- INVALID_HW_RING_ID);
- ring->fw_ring_id = INVALID_HW_RING_ID;
- bp->grp_info[grp_idx].agg_fw_ring_id =
- INVALID_HW_RING_ID;
- }
+ bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
+ bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
}
/* The completion rings are about to be freed. After that the
--
2.43.0
* [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-19 6:29 [PATCH net-next v3 0/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
2024-06-19 6:29 ` [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers David Wei
@ 2024-06-19 6:29 ` David Wei
2024-06-20 16:54 ` Simon Horman
2024-06-22 0:20 ` Jakub Kicinski
2024-06-21 9:20 ` [PATCH net-next v3 0/2] " patchwork-bot+netdevbpf
From: David Wei @ 2024-06-19 6:29 UTC (permalink / raw)
To: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev
Cc: Pavel Begunkov, Jakub Kicinski, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
Implement the netdev_queue_mgmt_ops API, added in [1], for bnxt.
Two bnxt_rx_ring_info structs are allocated to hold the new/old queue
memory. Queue memory is copied from/to the main bp->rx_ring[idx]
bnxt_rx_ring_info.
Queue memory is pre-allocated in bnxt_queue_mem_alloc() into a clone, and
then copied into bp->rx_ring[idx] in bnxt_queue_start().
Similarly, when bp->rx_ring[idx] is stopped its queue memory is copied
into a clone, and then freed later in bnxt_queue_mem_free().
I tested this patchset with netdev_rx_queue_restart(), including inducing
errors in all places that return an error code. In all cases, the queue is
left in a good working state.
Rx queues are created/destroyed using bnxt_hwrm_rx_ring_alloc() and
bnxt_hwrm_rx_ring_free(), which issue HWRM_RING_ALLOC and HWRM_RING_FREE
commands respectively to the firmware. By the time a HWRM_RING_FREE
response is received, there won't be any more completions from that
queue.
Thanks to Somnath for helping me with this patch. With their permission
I've added them as Acked-by.
[1]: https://lore.kernel.org/netdev/20240501232549.1327174-2-shailend@google.com/
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 275 ++++++++++++++++++++++
1 file changed, 275 insertions(+)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 9e8d5cc32f16..259fbe709a8b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3997,6 +3997,62 @@ static int bnxt_alloc_cp_rings(struct bnxt *bp)
return 0;
}
+static void bnxt_init_rx_ring_struct(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct bnxt_ring_mem_info *rmem;
+ struct bnxt_ring_struct *ring;
+
+ ring = &rxr->rx_ring_struct;
+ rmem = &ring->ring_mem;
+ rmem->nr_pages = bp->rx_nr_pages;
+ rmem->page_size = HW_RXBD_RING_SIZE;
+ rmem->pg_arr = (void **)rxr->rx_desc_ring;
+ rmem->dma_arr = rxr->rx_desc_mapping;
+ rmem->vmem_size = SW_RXBD_RING_SIZE * bp->rx_nr_pages;
+ rmem->vmem = (void **)&rxr->rx_buf_ring;
+
+ ring = &rxr->rx_agg_ring_struct;
+ rmem = &ring->ring_mem;
+ rmem->nr_pages = bp->rx_agg_nr_pages;
+ rmem->page_size = HW_RXBD_RING_SIZE;
+ rmem->pg_arr = (void **)rxr->rx_agg_desc_ring;
+ rmem->dma_arr = rxr->rx_agg_desc_mapping;
+ rmem->vmem_size = SW_RXBD_AGG_RING_SIZE * bp->rx_agg_nr_pages;
+ rmem->vmem = (void **)&rxr->rx_agg_ring;
+}
+
+static void bnxt_reset_rx_ring_struct(struct bnxt *bp,
+ struct bnxt_rx_ring_info *rxr)
+{
+ struct bnxt_ring_mem_info *rmem;
+ struct bnxt_ring_struct *ring;
+ int i;
+
+ rxr->page_pool->p.napi = NULL;
+ rxr->page_pool = NULL;
+
+ ring = &rxr->rx_ring_struct;
+ rmem = &ring->ring_mem;
+ rmem->pg_tbl = NULL;
+ rmem->pg_tbl_map = 0;
+ for (i = 0; i < rmem->nr_pages; i++) {
+ rmem->pg_arr[i] = NULL;
+ rmem->dma_arr[i] = 0;
+ }
+ *rmem->vmem = NULL;
+
+ ring = &rxr->rx_agg_ring_struct;
+ rmem = &ring->ring_mem;
+ rmem->pg_tbl = NULL;
+ rmem->pg_tbl_map = 0;
+ for (i = 0; i < rmem->nr_pages; i++) {
+ rmem->pg_arr[i] = NULL;
+ rmem->dma_arr[i] = 0;
+ }
+ *rmem->vmem = NULL;
+}
+
static void bnxt_init_ring_struct(struct bnxt *bp)
{
int i, j;
@@ -14914,6 +14970,224 @@ static const struct netdev_stat_ops bnxt_stat_ops = {
.get_base_stats = bnxt_get_base_stats,
};
+static int bnxt_alloc_rx_agg_bmap(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+{
+ u16 mem_size;
+
+ rxr->rx_agg_bmap_size = bp->rx_agg_ring_mask + 1;
+ mem_size = rxr->rx_agg_bmap_size / 8;
+ rxr->rx_agg_bmap = kzalloc(mem_size, GFP_KERNEL);
+ if (!rxr->rx_agg_bmap)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+{
+ struct bnxt_rx_ring_info *rxr, *clone;
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_ring_struct *ring;
+ int rc;
+
+ rxr = &bp->rx_ring[idx];
+ clone = qmem;
+ memcpy(clone, rxr, sizeof(*rxr));
+ bnxt_init_rx_ring_struct(bp, clone);
+ bnxt_reset_rx_ring_struct(bp, clone);
+
+ clone->rx_prod = 0;
+ clone->rx_agg_prod = 0;
+ clone->rx_sw_agg_prod = 0;
+ clone->rx_next_cons = 0;
+
+ rc = bnxt_alloc_rx_page_pool(bp, clone, rxr->page_pool->p.nid);
+ if (rc)
+ return rc;
+
+ ring = &clone->rx_ring_struct;
+ rc = bnxt_alloc_ring(bp, &ring->ring_mem);
+ if (rc)
+ goto err_free_rx_ring;
+
+ if (bp->flags & BNXT_FLAG_AGG_RINGS) {
+ ring = &clone->rx_agg_ring_struct;
+ rc = bnxt_alloc_ring(bp, &ring->ring_mem);
+ if (rc)
+ goto err_free_rx_agg_ring;
+
+ rc = bnxt_alloc_rx_agg_bmap(bp, clone);
+ if (rc)
+ goto err_free_rx_agg_ring;
+ }
+
+ bnxt_init_one_rx_ring_rxbd(bp, clone);
+ bnxt_init_one_rx_agg_ring_rxbd(bp, clone);
+
+ bnxt_alloc_one_rx_ring_skb(bp, clone, idx);
+ if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ bnxt_alloc_one_rx_ring_page(bp, clone, idx);
+
+ return 0;
+
+err_free_rx_agg_ring:
+ bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
+err_free_rx_ring:
+ bnxt_free_ring(bp, &clone->rx_ring_struct.ring_mem);
+ clone->page_pool->p.napi = NULL;
+ page_pool_destroy(clone->page_pool);
+ clone->page_pool = NULL;
+ return rc;
+}
+
+static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
+{
+ struct bnxt_rx_ring_info *rxr = qmem;
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_ring_struct *ring;
+
+ bnxt_free_one_rx_ring(bp, rxr);
+ bnxt_free_one_rx_agg_ring(bp, rxr);
+
+ /* At this point, this NAPI instance has another page pool associated
+ * with it. Disconnect here before freeing the old page pool to avoid
+ * warnings.
+ */
+ rxr->page_pool->p.napi = NULL;
+ page_pool_destroy(rxr->page_pool);
+ rxr->page_pool = NULL;
+
+ ring = &rxr->rx_ring_struct;
+ bnxt_free_ring(bp, &ring->ring_mem);
+
+ ring = &rxr->rx_agg_ring_struct;
+ bnxt_free_ring(bp, &ring->ring_mem);
+
+ kfree(rxr->rx_agg_bmap);
+ rxr->rx_agg_bmap = NULL;
+}
+
+static void bnxt_copy_rx_ring(struct bnxt *bp,
+ struct bnxt_rx_ring_info *dst,
+ struct bnxt_rx_ring_info *src)
+{
+ struct bnxt_ring_mem_info *dst_rmem, *src_rmem;
+ struct bnxt_ring_struct *dst_ring, *src_ring;
+ int i;
+
+ dst_ring = &dst->rx_ring_struct;
+ dst_rmem = &dst_ring->ring_mem;
+ src_ring = &src->rx_ring_struct;
+ src_rmem = &src_ring->ring_mem;
+
+ WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
+ WARN_ON(dst_rmem->page_size != src_rmem->page_size);
+ WARN_ON(dst_rmem->flags != src_rmem->flags);
+ WARN_ON(dst_rmem->depth != src_rmem->depth);
+ WARN_ON(dst_rmem->vmem_size != src_rmem->vmem_size);
+ WARN_ON(dst_rmem->ctx_mem != src_rmem->ctx_mem);
+
+ dst_rmem->pg_tbl = src_rmem->pg_tbl;
+ dst_rmem->pg_tbl_map = src_rmem->pg_tbl_map;
+ *dst_rmem->vmem = *src_rmem->vmem;
+ for (i = 0; i < dst_rmem->nr_pages; i++) {
+ dst_rmem->pg_arr[i] = src_rmem->pg_arr[i];
+ dst_rmem->dma_arr[i] = src_rmem->dma_arr[i];
+ }
+
+ if (!(bp->flags & BNXT_FLAG_AGG_RINGS))
+ return;
+
+ dst_ring = &dst->rx_agg_ring_struct;
+ dst_rmem = &dst_ring->ring_mem;
+ src_ring = &src->rx_agg_ring_struct;
+ src_rmem = &src_ring->ring_mem;
+
+ WARN_ON(dst_rmem->nr_pages != src_rmem->nr_pages);
+ WARN_ON(dst_rmem->page_size != src_rmem->page_size);
+ WARN_ON(dst_rmem->flags != src_rmem->flags);
+ WARN_ON(dst_rmem->depth != src_rmem->depth);
+ WARN_ON(dst_rmem->vmem_size != src_rmem->vmem_size);
+ WARN_ON(dst_rmem->ctx_mem != src_rmem->ctx_mem);
+ WARN_ON(dst->rx_agg_bmap_size != src->rx_agg_bmap_size);
+
+ dst_rmem->pg_tbl = src_rmem->pg_tbl;
+ dst_rmem->pg_tbl_map = src_rmem->pg_tbl_map;
+ *dst_rmem->vmem = *src_rmem->vmem;
+ for (i = 0; i < dst_rmem->nr_pages; i++) {
+ dst_rmem->pg_arr[i] = src_rmem->pg_arr[i];
+ dst_rmem->dma_arr[i] = src_rmem->dma_arr[i];
+ }
+
+ dst->rx_agg_bmap = src->rx_agg_bmap;
+}
+
+static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_rx_ring_info *rxr, *clone;
+ struct bnxt_cp_ring_info *cpr;
+ int rc;
+
+ rxr = &bp->rx_ring[idx];
+ clone = qmem;
+
+ rxr->rx_prod = clone->rx_prod;
+ rxr->rx_agg_prod = clone->rx_agg_prod;
+ rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
+ rxr->rx_next_cons = clone->rx_next_cons;
+ rxr->page_pool = clone->page_pool;
+
+ bnxt_copy_rx_ring(bp, rxr, clone);
+
+ rc = bnxt_hwrm_rx_ring_alloc(bp, rxr);
+ if (rc)
+ return rc;
+ rc = bnxt_hwrm_rx_agg_ring_alloc(bp, rxr);
+ if (rc)
+ goto err_free_hwrm_rx_ring;
+
+ bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod);
+ if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
+
+ napi_enable(&rxr->bnapi->napi);
+
+ cpr = &rxr->bnapi->cp_ring;
+ cpr->sw_stats->rx.rx_resets++;
+
+ return 0;
+
+err_free_hwrm_rx_ring:
+ bnxt_hwrm_rx_ring_free(bp, rxr, false);
+ return rc;
+}
+
+static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_rx_ring_info *rxr;
+
+ rxr = &bp->rx_ring[idx];
+ napi_disable(&rxr->bnapi->napi);
+ bnxt_hwrm_rx_ring_free(bp, rxr, false);
+ bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+ rxr->rx_next_cons = 0;
+
+ memcpy(qmem, rxr, sizeof(*rxr));
+ bnxt_init_rx_ring_struct(bp, qmem);
+
+ return 0;
+}
+
+static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
+ .ndo_queue_mem_size = sizeof(struct bnxt_rx_ring_info),
+ .ndo_queue_mem_alloc = bnxt_queue_mem_alloc,
+ .ndo_queue_mem_free = bnxt_queue_mem_free,
+ .ndo_queue_start = bnxt_queue_start,
+ .ndo_queue_stop = bnxt_queue_stop,
+};
+
static void bnxt_remove_one(struct pci_dev *pdev)
{
struct net_device *dev = pci_get_drvdata(pdev);
@@ -15379,6 +15653,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
dev->stat_ops = &bnxt_stat_ops;
dev->watchdog_timeo = BNXT_TX_TIMEOUT;
dev->ethtool_ops = &bnxt_ethtool_ops;
+ dev->queue_mgmt_ops = &bnxt_queue_mgmt_ops;
pci_set_drvdata(pdev, dev);
rc = bnxt_alloc_hwrm_resources(bp);
--
2.43.0
* Re: [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers
2024-06-19 6:29 ` [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers David Wei
@ 2024-06-20 16:54 ` Simon Horman
From: Simon Horman @ 2024-06-20 16:54 UTC (permalink / raw)
To: David Wei
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, Jakub Kicinski, David Ahern,
David S. Miller, Eric Dumazet, Paolo Abeni
On Tue, Jun 18, 2024 at 11:29:30PM -0700, David Wei wrote:
> To prepare for queue API implementation, split rx ring functions out
> from ring helpers. These new helpers will be called from queue API
> implementation.
>
> Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Simon Horman <horms@kernel.org>
* Re: [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-19 6:29 ` [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
@ 2024-06-20 16:54 ` Simon Horman
2024-06-22 0:20 ` Jakub Kicinski
From: Simon Horman @ 2024-06-20 16:54 UTC (permalink / raw)
To: David Wei
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, Jakub Kicinski, David Ahern,
David S. Miller, Eric Dumazet, Paolo Abeni
On Tue, Jun 18, 2024 at 11:29:31PM -0700, David Wei wrote:
> Implement netdev_queue_mgmt_ops for bnxt added in [1].
>
> Two bnxt_rx_ring_info structs are allocated to hold the new/old queue
> memory. Queue memory is copied from/to the main bp->rx_ring[idx]
> bnxt_rx_ring_info.
>
> Queue memory is pre-allocated in bnxt_queue_mem_alloc() into a clone,
> and then copied into bp->rx_ring[idx] in bnxt_queue_mem_start().
>
> Similarly, when bp->rx_ring[idx] is stopped its queue memory is copied
> into a clone, and then freed later in bnxt_queue_mem_free().
>
> I tested this patchset with netdev_rx_queue_restart(), including
> inducing errors in all places that returns an error code. In all cases,
> the queue is left in a good working state.
>
> Rx queues are created/destroyed using bnxt_hwrm_rx_ring_alloc() and
> bnxt_hwrm_rx_ring_free(), which issue HWRM_RING_ALLOC and HWRM_RING_FREE
> commands respectively to the firmware. By the time a HWRM_RING_FREE
> response is received, there won't be any more completions from that
> queue.
>
> Thanks to Somnath for helping me with this patch. With their permission
> I've added them as Acked-by.
>
> [1]: https://lore.kernel.org/netdev/20240501232549.1327174-2-shailend@google.com/
>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Signed-off-by: David Wei <dw@davidwei.uk>
Reviewed-by: Simon Horman <horms@kernel.org>
* Re: [PATCH net-next v3 0/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-19 6:29 [PATCH net-next v3 0/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
2024-06-19 6:29 ` [PATCH net-next v3 1/2] bnxt_en: split rx ring helpers out from ring helpers David Wei
2024-06-19 6:29 ` [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
@ 2024-06-21 9:20 ` patchwork-bot+netdevbpf
From: patchwork-bot+netdevbpf @ 2024-06-21 9:20 UTC (permalink / raw)
To: David Wei
Cc: michael.chan, andrew.gospodarek, adrian.alvarado, somnath.kotur,
netdev, asml.silence, kuba, dsahern, davem, edumazet, pabeni
Hello:
This series was applied to netdev/net-next.git (main)
by David S. Miller <davem@davemloft.net>:
On Tue, 18 Jun 2024 23:29:29 -0700 you wrote:
> Implement netdev_queue_mgmt_ops for bnxt added in [1]. This will be used
> in the io_uring ZC Rx patchset to configure queues with a custom page
> pool w/ a special memory provider for zero copy support.
>
> The first two patches prep the driver, while the final patch adds the
> implementation.
>
> [...]
Here is the summary with links:
- [net-next,v3,1/2] bnxt_en: split rx ring helpers out from ring helpers
https://git.kernel.org/netdev/net-next/c/88f56254a275
- [net-next,v3,2/2] bnxt_en: implement netdev_queue_mgmt_ops
https://git.kernel.org/netdev/net-next/c/2d694c27d32e
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-19 6:29 ` [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops David Wei
2024-06-20 16:54 ` Simon Horman
@ 2024-06-22 0:20 ` Jakub Kicinski
2024-06-24 18:20 ` David Wei
From: Jakub Kicinski @ 2024-06-22 0:20 UTC (permalink / raw)
To: David Wei
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
On Tue, 18 Jun 2024 23:29:31 -0700 David Wei wrote:
> + /* At this point, this NAPI instance has another page pool associated
> + * with it. Disconnect here before freeing the old page pool to avoid
> + * warnings.
> + */
> + rxr->page_pool->p.napi = NULL;
> + page_pool_destroy(rxr->page_pool);
> + rxr->page_pool = NULL;
What's the warning you hit?
We should probably bring back page_pool_unlink_napi(),
if this is really needed.
* Re: [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-22 0:20 ` Jakub Kicinski
@ 2024-06-24 18:20 ` David Wei
2024-06-24 22:02 ` Jakub Kicinski
From: David Wei @ 2024-06-24 18:20 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
On 2024-06-21 17:20, Jakub Kicinski wrote:
> On Tue, 18 Jun 2024 23:29:31 -0700 David Wei wrote:
>> + /* At this point, this NAPI instance has another page pool associated
>> + * with it. Disconnect here before freeing the old page pool to avoid
>> + * warnings.
>> + */
>> + rxr->page_pool->p.napi = NULL;
>> + page_pool_destroy(rxr->page_pool);
>> + rxr->page_pool = NULL;
>
> What's the warning you hit?
> We should probably bring back page_pool_unlink_napi(),
> if this is really needed.
This one:
https://elixir.bootlin.com/linux/v6.10-rc5/source/net/core/page_pool.c#L1030
The cause is having two different bnxt_rx_ring_info structs referring to the
same NAPI instance. One is the proper one in bp->rx_ring, the other is the
temporarily allocated one that holds the "replacement" during the reset.
* Re: [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-24 18:20 ` David Wei
@ 2024-06-24 22:02 ` Jakub Kicinski
2024-06-24 22:50 ` David Wei
From: Jakub Kicinski @ 2024-06-24 22:02 UTC (permalink / raw)
To: David Wei
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
On Mon, 24 Jun 2024 11:20:59 -0700 David Wei wrote:
> > What's the warning you hit?
> > We should probably bring back page_pool_unlink_napi(),
> > if this is really needed.
>
> This one:
>
> https://elixir.bootlin.com/linux/v6.10-rc5/source/net/core/page_pool.c#L1030
>
> The cause is having two different bnxt_rx_ring_info referring to the
> same NAPI instance. One is the proper one in bp->rx_ring, the other is
> the temporarily allocated one for holding the "replacement" during the
> reset.
Makes sense. As I said, please look through the history - some form of
page_pool_unlink_napi() used to be exported for this use case, but Olek(?)
deleted it due to lack of in-tree users.
With that helper in place you can unlink the page pool while the NAPI
is stopped, without poking into internals at the driver level.
* Re: [PATCH net-next v3 2/2] bnxt_en: implement netdev_queue_mgmt_ops
2024-06-24 22:02 ` Jakub Kicinski
@ 2024-06-24 22:50 ` David Wei
From: David Wei @ 2024-06-24 22:50 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Michael Chan, Andy Gospodarek, Adrian Alvarado, Somnath Kotur,
netdev, Pavel Begunkov, David Ahern, David S. Miller,
Eric Dumazet, Paolo Abeni
On 2024-06-24 15:02, Jakub Kicinski wrote:
> On Mon, 24 Jun 2024 11:20:59 -0700 David Wei wrote:
>>> What's the warning you hit?
>>> We should probably bring back page_pool_unlink_napi(),
>>> if this is really needed.
>>
>> This one:
>>
>> https://elixir.bootlin.com/linux/v6.10-rc5/source/net/core/page_pool.c#L1030
>>
>> The cause is having two different bnxt_rx_ring_info referring to the
>> same NAPI instance. One is the proper one in bp->rx_ring, the other is
>> the temporarily allocated one for holding the "replacement" during the
>> reset.
>
> Makes sense, as I said please look thru the history - some form of
> page_pool_unlink_napi() used to be exported for this use case, but
> Olek(?) deleted it due to lack of in-tree users.
>
> With that helper in place you can unlink the page pool while the NAPI
> is stopped, without poking into internals at the driver level.
You got it. I'll send it as a follow-up.
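For reference, a minimal sketch of what such a follow-up helper could look
like, assuming it only needs to clear the pool's NAPI pointer once the
associated NAPI instance is disabled (mirroring what the driver currently
open-codes); this is illustrative only and the actual follow-up may take a
different shape:

#include <net/page_pool/types.h>

static inline void example_page_pool_unlink_napi(struct page_pool *pool)
{
	if (!pool)
		return;

	/* Caller guarantees the NAPI is disabled, so no concurrent
	 * in-softirq (direct) recycling can still observe this pool.
	 */
	WRITE_ONCE(pool->p.napi, NULL);
}

With something like this exported, bnxt_queue_mem_free() could call it instead
of writing rxr->page_pool->p.napi = NULL directly.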