* [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe)
@ 2026-02-06 17:48 Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 1/9] libeth: pass Rx queue index to PP when creating a fill queue Tony Nguyen
` (8 more replies)
0 siblings, 9 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:48 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev; +Cc: Tony Nguyen
For libeth/ice:
Alexander adds support for devmem/io_uring Rx and Tx.
Quoting Alexander:
Now that ice uses libeth for managing Rx buffers and supports
configurable header split, it's ready to get support for sending
and receiving packets with unreadable (to the kernel) frags.
Extend libeth just a little bit to allow creating PPs with custom
memory providers and make sure ice works correctly with the netdev
ops locking. Then add the full set of queue_mgmt_ops and don't
unmap unreadable frags on Tx completion.
No perf regressions for the regular flows and no code duplication
implied.
Credits to the fbnic developers, whose code helped me understand
the memory providers and queue_mgmt_ops logic and served as
a reference.
For ice:
Simon Horman adds a const modifier to a read-only member of a struct.
For i40e:
Yury Norov removes an unneeded check of bitmap_weight().
Andy Shevchenko adds a missing include.
For ixgbe:
Aleksandr changes the declaration of a bitmap to use the
DECLARE_BITMAP() macro.
The following are changes since commit 24cf78c738318f3d2b961a1ab4b3faf1eca860d7:
net/mlx5e: SHAMPO, Switch to header memcpy
and are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue 100GbE
Aleksandr Loktionov (1):
ixgbe: refactor: use DECLARE_BITMAP for ring state field
Alexander Lobakin (5):
libeth: pass Rx queue index to PP when creating a fill queue
libeth: handle creating pools with unreadable buffers
ice: migrate to netdev ops lock
ice: implement Rx queue management ops
ice: add support for transmitting unreadable frags
Andy Shevchenko (1):
i40e: Add missing header
Simon Horman (1):
ice: Make name member of struct ice_cgu_pin_desc const
Yury Norov (NVIDIA) (1):
i40e: drop useless bitmap_weight() call in i40e_set_rxfh_fields()
.../net/ethernet/intel/i40e/i40e_ethtool.c | 21 +-
drivers/net/ethernet/intel/i40e/i40e_hmc.h | 2 +
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 1 +
drivers/net/ethernet/intel/ice/ice_base.c | 259 +++++++++++++-----
drivers/net/ethernet/intel/ice/ice_base.h | 2 +
drivers/net/ethernet/intel/ice/ice_ethtool.c | 1 +
drivers/net/ethernet/intel/ice/ice_lib.c | 150 ++++++++--
drivers/net/ethernet/intel/ice/ice_lib.h | 12 +-
drivers/net/ethernet/intel/ice/ice_main.c | 55 ++--
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 2 +-
drivers/net/ethernet/intel/ice/ice_sf_eth.c | 2 +
drivers/net/ethernet/intel/ice/ice_txrx.c | 43 ++-
drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +
drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 13 +
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 +
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 27 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 4 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 56 ++--
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 2 +-
drivers/net/ethernet/intel/libeth/rx.c | 46 ++++
include/net/libeth/rx.h | 2 +
include/net/libeth/tx.h | 2 +-
23 files changed, 527 insertions(+), 183 deletions(-)
--
2.47.1
* [PATCH net-next 1/9] libeth: pass Rx queue index to PP when creating a fill queue
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
@ 2026-02-06 17:48 ` Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 2/9] libeth: handle creating pools with unreadable buffers Tony Nguyen
` (7 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:48 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Alexander Lobakin, anthony.l.nguyen, jacob.e.keller,
nxne.cnse.osdt.itp.upstreaming, horms, maciej.fijalkowski,
magnus.karlsson, ast, daniel, hawk, john.fastabend, sdf, bpf,
Aleksandr Loktionov, Alexander Nowlin
From: Alexander Lobakin <aleksander.lobakin@intel.com>
page_pool_create() has recently started accepting an optional stack
index of the Rx queue which the pool will be created for. It can
then be used on the control path for things like memory providers.
Add the same field to libeth_fq and pass the index from all the
drivers using libeth for managing Rx to simplify implementing MP
support later.
idpf has one libeth_fq per buffer/fill queue and each Rx queue has
two fill queues, but since fill queues can never be shared, we can
store the corresponding Rx queue index there during
initialization to pass it to libeth.
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Alexander Nowlin <alexander.nowlin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 1 +
drivers/net/ethernet/intel/ice/ice_base.c | 2 ++
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 13 +++++++++++++
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 ++
drivers/net/ethernet/intel/libeth/rx.c | 1 +
include/net/libeth/rx.h | 2 ++
6 files changed, 21 insertions(+)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 363c42bf3dcf..d3c68659162b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -771,6 +771,7 @@ int iavf_setup_rx_descriptors(struct iavf_ring *rx_ring)
.count = rx_ring->count,
.buf_len = LIBIE_MAX_RX_BUF_LEN,
.nid = NUMA_NO_NODE,
+ .idx = rx_ring->queue_index,
};
int ret;
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index afbff8aa9ceb..1b7d10fad4f2 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -607,6 +607,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
struct libeth_fq fq = {
.count = rq->count,
.nid = NUMA_NO_NODE,
+ .idx = rq->q_index,
.hsplit = rq->vsi->hsplit,
.xdp = ice_is_xdp_ena_vsi(rq->vsi),
.buf_len = LIBIE_MAX_RX_BUF_LEN,
@@ -629,6 +630,7 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
.count = rq->count,
.type = LIBETH_FQE_HDR,
.nid = NUMA_NO_NODE,
+ .idx = rq->q_index,
.xdp = ice_is_xdp_ena_vsi(rq->vsi),
};
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 376050308b06..36e2050dbb04 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -558,6 +558,7 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
.type = LIBETH_FQE_HDR,
.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
.nid = idpf_q_vector_to_mem(bufq->q_vector),
+ .idx = bufq->rxq_idx,
};
int ret;
@@ -700,6 +701,7 @@ static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
.type = LIBETH_FQE_MTU,
.buf_len = IDPF_RX_MAX_BUF_SZ,
.nid = idpf_q_vector_to_mem(rxq->q_vector),
+ .idx = rxq->idx,
};
int ret;
@@ -760,6 +762,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
.hsplit = idpf_queue_has(HSPLIT_EN, bufq),
.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
.nid = idpf_q_vector_to_mem(bufq->q_vector),
+ .idx = bufq->rxq_idx,
};
int ret;
@@ -1919,6 +1922,16 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport,
LIBETH_RX_LL_LEN;
idpf_rxq_set_descids(rsrc, q);
}
+
+ if (!idpf_is_queue_model_split(rsrc->rxq_model))
+ continue;
+
+ for (u32 j = 0; j < rsrc->num_bufqs_per_qgrp; j++) {
+ struct idpf_buf_queue *bufq;
+
+ bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ bufq->rxq_idx = rx_qgrp->splitq.rxq_sets[0]->rxq.idx;
+ }
}
err_alloc:
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 4be5b3b6d3ed..a0d92adf11c4 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -748,6 +748,7 @@ libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
* @size: Length of descriptor ring in bytes
* @dma: Physical address of ring
* @q_vector: Backreference to associated vector
+ * @rxq_idx: stack index of the corresponding Rx queue
* @rx_buffer_low_watermark: RX buffer low watermark
* @rx_hbuf_size: Header buffer size
* @rx_buf_size: Buffer size
@@ -791,6 +792,7 @@ struct idpf_buf_queue {
dma_addr_t dma;
struct idpf_q_vector *q_vector;
+ u16 rxq_idx;
u16 rx_buffer_low_watermark;
u16 rx_hbuf_size;
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 62521a1f4ec9..8874b714cdcc 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -156,6 +156,7 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
.order = LIBETH_RX_PAGE_ORDER,
.pool_size = fq->count,
.nid = fq->nid,
+ .queue_idx = fq->idx,
.dev = napi->dev->dev.parent,
.netdev = napi->dev,
.napi = napi,
diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
index 5d991404845e..3b3d7acd13c9 100644
--- a/include/net/libeth/rx.h
+++ b/include/net/libeth/rx.h
@@ -71,6 +71,7 @@ enum libeth_fqe_type {
* @xdp: flag indicating whether XDP is enabled
* @buf_len: HW-writeable length per each buffer
* @nid: ID of the closest NUMA node with memory
+ * @idx: stack index of the corresponding Rx queue
*/
struct libeth_fq {
struct_group_tagged(libeth_fq_fp, fp,
@@ -88,6 +89,7 @@ struct libeth_fq {
u32 buf_len;
int nid;
+ u32 idx;
};
int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi);
--
2.47.1
* [PATCH net-next 2/9] libeth: handle creating pools with unreadable buffers
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 1/9] libeth: pass Rx queue index to PP when creating a fill queue Tony Nguyen
@ 2026-02-06 17:48 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 3/9] ice: migrate to netdev ops lock Tony Nguyen
` (6 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:48 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Alexander Lobakin, anthony.l.nguyen, jacob.e.keller,
nxne.cnse.osdt.itp.upstreaming, horms, maciej.fijalkowski,
magnus.karlsson, ast, daniel, hawk, john.fastabend, sdf, bpf,
Aleksandr Loktionov, Alexander Nowlin
From: Alexander Lobakin <aleksander.lobakin@intel.com>
libeth has been using netmems for quite some time already, so to
support unreadable frags / memory providers, it only needs to set
PP_FLAG_ALLOW_UNREADABLE_NETMEM when needed.
Also add a couple of sanity checks to make sure the driver didn't
mess up the configuration options and, when an MP is installed,
always return a truesize equal to PAGE_SIZE, so that
libeth_rx_alloc() will never try to allocate frags. Memory providers
manage buffers on their own and expect a 1:1 buffer / HW Rx
descriptor association.
Bonus: mention in the libeth_sqe_type description that
LIBETH_SQE_EMPTY should also be used for netmem Tx SQEs -- they
don't need DMA unmapping.
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Alexander Nowlin <alexander.nowlin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/libeth/rx.c | 45 ++++++++++++++++++++++++++
include/net/libeth/tx.h | 2 +-
2 files changed, 46 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/libeth/rx.c b/drivers/net/ethernet/intel/libeth/rx.c
index 8874b714cdcc..7492af5f764c 100644
--- a/drivers/net/ethernet/intel/libeth/rx.c
+++ b/drivers/net/ethernet/intel/libeth/rx.c
@@ -6,6 +6,7 @@
#include <linux/export.h>
#include <net/libeth/rx.h>
+#include <net/netdev_queues.h>
/* Rx buffer management */
@@ -139,9 +140,50 @@ static bool libeth_rx_page_pool_params_zc(struct libeth_fq *fq,
fq->buf_len = clamp(mtu, LIBETH_RX_BUF_STRIDE, max);
fq->truesize = fq->buf_len;
+ /*
+ * Allow frags only for kernel pages. `fq->truesize == pp->max_len`
+ * will always fall back to regular page_pool_alloc_netmems()
+ * regardless of the MTU / FQ buffer size.
+ */
+ if (pp->flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM)
+ fq->truesize = pp->max_len;
+
return true;
}
+/**
+ * libeth_rx_page_pool_check_unread - check input params for unreadable MPs
+ * @fq: buffer queue to check
+ * @pp: &page_pool_params for the queue
+ *
+ * Make sure we don't create an invalid pool with full-frame unreadable
+ * buffers, bidirectional unreadable buffers or so, and configure the
+ * ZC payload pool accordingly.
+ *
+ * Return: true on success, false on invalid input params.
+ */
+static bool libeth_rx_page_pool_check_unread(const struct libeth_fq *fq,
+ struct page_pool_params *pp)
+{
+ if (!pp->netdev)
+ return true;
+
+ if (!netif_rxq_has_unreadable_mp(pp->netdev, pp->queue_idx))
+ return true;
+
+ /* For now, the core stack doesn't allow XDP with unreadable frags */
+ if (fq->xdp)
+ return false;
+
+ /* It should be either a header pool or a ZC payload pool */
+ if (fq->type == LIBETH_FQE_HDR)
+ return !fq->hsplit;
+
+ pp->flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;
+
+ return fq->hsplit;
+}
+
/**
* libeth_rx_fq_create - create a PP with the default libeth settings
* @fq: buffer queue struct to fill
@@ -165,6 +207,9 @@ int libeth_rx_fq_create(struct libeth_fq *fq, struct napi_struct *napi)
struct page_pool *pool;
int ret;
+ if (!libeth_rx_page_pool_check_unread(fq, &pp))
+ return -EINVAL;
+
pp.dma_dir = fq->xdp ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
if (!fq->hsplit)
diff --git a/include/net/libeth/tx.h b/include/net/libeth/tx.h
index c3db5c6f1641..a66fc2b3a114 100644
--- a/include/net/libeth/tx.h
+++ b/include/net/libeth/tx.h
@@ -12,7 +12,7 @@
/**
* enum libeth_sqe_type - type of &libeth_sqe to act on Tx completion
- * @LIBETH_SQE_EMPTY: unused/empty OR XDP_TX/XSk frame, no action required
+ * @LIBETH_SQE_EMPTY: empty OR netmem/XDP_TX/XSk frame, no action required
* @LIBETH_SQE_CTX: context descriptor with empty SQE, no action required
* @LIBETH_SQE_SLAB: kmalloc-allocated buffer, unmap and kfree()
* @LIBETH_SQE_FRAG: mapped skb frag, only unmap DMA
--
2.47.1
* [PATCH net-next 3/9] ice: migrate to netdev ops lock
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 1/9] libeth: pass Rx queue index to PP when creating a fill queue Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 2/9] libeth: handle creating pools with unreadable buffers Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-11 4:24 ` [net-next,3/9] " Jakub Kicinski
2026-02-06 17:49 ` [PATCH net-next 4/9] ice: implement Rx queue management ops Tony Nguyen
` (5 subsequent siblings)
8 siblings, 1 reply; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Alexander Lobakin, anthony.l.nguyen, jacob.e.keller,
nxne.cnse.osdt.itp.upstreaming, horms, maciej.fijalkowski,
magnus.karlsson, ast, daniel, hawk, john.fastabend, sdf, bpf,
Aleksandr Loktionov, Alexander Nowlin
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Queue management ops unconditionally enable netdev locking. The same
lock is taken by default by several NAPI configuration functions,
such as napi_enable() and netif_napi_set_irq().
Request ops locking in advance and make sure we use the _locked
counterparts of those functions to avoid deadlocks, taking the lock
manually where needed (suspend/resume, queue rebuild and resets).
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Alexander Nowlin <alexander.nowlin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/ice/ice_base.c | 63 ++++++--
drivers/net/ethernet/intel/ice/ice_base.h | 2 +
drivers/net/ethernet/intel/ice/ice_lib.c | 150 +++++++++++++++++---
drivers/net/ethernet/intel/ice/ice_lib.h | 7 +-
drivers/net/ethernet/intel/ice/ice_main.c | 54 ++++---
drivers/net/ethernet/intel/ice/ice_sf_eth.c | 1 +
drivers/net/ethernet/intel/ice/ice_xsk.c | 4 +-
7 files changed, 225 insertions(+), 56 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 1b7d10fad4f2..f084b1c48e5d 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -153,8 +153,8 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
* handler here (i.e. resume, reset/rebuild, etc.)
*/
if (vsi->netdev)
- netif_napi_add_config(vsi->netdev, &q_vector->napi,
- ice_napi_poll, v_idx);
+ netif_napi_add_config_locked(vsi->netdev, &q_vector->napi,
+ ice_napi_poll, v_idx);
out:
/* tie q_vector and VSI together */
@@ -196,7 +196,7 @@ static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)
/* only VSI with an associated netdev is set up with NAPI */
if (vsi->netdev)
- netif_napi_del(&q_vector->napi);
+ netif_napi_del_locked(&q_vector->napi);
/* release MSIX interrupt if q_vector had interrupt allocated */
if (q_vector->irq.index < 0)
@@ -896,13 +896,15 @@ int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx)
}
/**
- * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors
+ * ice_vsi_alloc_q_vectors_locked - Allocate memory for interrupt vectors
* @vsi: the VSI being configured
*
- * We allocate one q_vector per queue interrupt. If allocation fails we
- * return -ENOMEM.
+ * Should be called only under the netdev lock.
+ * We allocate one q_vector per queue interrupt.
+ *
+ * Return: 0 on success, -ENOMEM if allocation fails.
*/
-int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
+int ice_vsi_alloc_q_vectors_locked(struct ice_vsi *vsi)
{
struct device *dev = ice_pf_to_dev(vsi->back);
u16 v_idx;
@@ -929,6 +931,30 @@ int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
return v_idx ? 0 : err;
}
+/**
+ * ice_vsi_alloc_q_vectors - Allocate memory for interrupt vectors
+ * @vsi: the VSI being configured
+ *
+ * We allocate one q_vector per queue interrupt.
+ *
+ * Return: 0 on success, -ENOMEM if allocation fails.
+ */
+int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi)
+{
+ struct net_device *dev = vsi->netdev;
+ int ret;
+
+ if (dev)
+ netdev_lock(dev);
+
+ ret = ice_vsi_alloc_q_vectors_locked(vsi);
+
+ if (dev)
+ netdev_unlock(dev);
+
+ return ret;
+}
+
/**
* ice_vsi_map_rings_to_vectors - Map VSI rings to interrupt vectors
* @vsi: the VSI being configured
@@ -992,10 +1018,12 @@ void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi)
}
/**
- * ice_vsi_free_q_vectors - Free memory allocated for interrupt vectors
+ * ice_vsi_free_q_vectors_locked - Free memory allocated for interrupt vectors
* @vsi: the VSI having memory freed
+ *
+ * Should be called only under the netdev lock.
*/
-void ice_vsi_free_q_vectors(struct ice_vsi *vsi)
+void ice_vsi_free_q_vectors_locked(struct ice_vsi *vsi)
{
int v_idx;
@@ -1005,6 +1033,23 @@ void ice_vsi_free_q_vectors(struct ice_vsi *vsi)
vsi->num_q_vectors = 0;
}
+/**
+ * ice_vsi_free_q_vectors - Free memory allocated for interrupt vectors
+ * @vsi: the VSI having memory freed
+ */
+void ice_vsi_free_q_vectors(struct ice_vsi *vsi)
+{
+ struct net_device *dev = vsi->netdev;
+
+ if (dev)
+ netdev_lock(dev);
+
+ ice_vsi_free_q_vectors_locked(vsi);
+
+ if (dev)
+ netdev_unlock(dev);
+}
+
/**
* ice_cfg_tstamp - Configure Tx time stamp queue
* @tx_ring: Tx ring to be configured with timestamping
diff --git a/drivers/net/ethernet/intel/ice/ice_base.h b/drivers/net/ethernet/intel/ice/ice_base.h
index d28294247599..99b2c7232829 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.h
+++ b/drivers/net/ethernet/intel/ice/ice_base.h
@@ -12,8 +12,10 @@ int __ice_vsi_get_qs(struct ice_qs_cfg *qs_cfg);
int
ice_vsi_ctrl_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx, bool wait);
int ice_vsi_wait_one_rx_ring(struct ice_vsi *vsi, bool ena, u16 rxq_idx);
+int ice_vsi_alloc_q_vectors_locked(struct ice_vsi *vsi);
int ice_vsi_alloc_q_vectors(struct ice_vsi *vsi);
void ice_vsi_map_rings_to_vectors(struct ice_vsi *vsi);
+void ice_vsi_free_q_vectors_locked(struct ice_vsi *vsi);
void ice_vsi_free_q_vectors(struct ice_vsi *vsi);
int ice_vsi_cfg_single_txq(struct ice_vsi *vsi, struct ice_tx_ring **tx_rings,
u16 q_idx);
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index d921269e1fe7..98009de08bfb 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2308,10 +2308,14 @@ static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi)
}
/**
- * ice_vsi_cfg_def - configure default VSI based on the type
+ * ice_vsi_cfg_def_locked - configure default VSI based on the type
* @vsi: pointer to VSI
+ *
+ * Should be called only with the netdev lock taken.
+ *
+ * Return: 0 on success, -errno on failure.
*/
-static int ice_vsi_cfg_def(struct ice_vsi *vsi)
+static int ice_vsi_cfg_def_locked(struct ice_vsi *vsi)
{
struct device *dev = ice_pf_to_dev(vsi->back);
struct ice_pf *pf = vsi->back;
@@ -2354,7 +2358,7 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
case ICE_VSI_CTRL:
case ICE_VSI_SF:
case ICE_VSI_PF:
- ret = ice_vsi_alloc_q_vectors(vsi);
+ ret = ice_vsi_alloc_q_vectors_locked(vsi);
if (ret)
goto unroll_vsi_init;
@@ -2404,7 +2408,7 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
* creates a VSI and corresponding structures for bookkeeping
* purpose
*/
- ret = ice_vsi_alloc_q_vectors(vsi);
+ ret = ice_vsi_alloc_q_vectors_locked(vsi);
if (ret)
goto unroll_vsi_init;
@@ -2460,6 +2464,28 @@ static int ice_vsi_cfg_def(struct ice_vsi *vsi)
return ret;
}
+/**
+ * ice_vsi_cfg_def - configure default VSI based on the type
+ * @vsi: pointer to VSI
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+static int ice_vsi_cfg_def(struct ice_vsi *vsi)
+{
+ struct net_device *dev = vsi->netdev;
+ int ret;
+
+ if (dev)
+ netdev_lock(dev);
+
+ ret = ice_vsi_cfg_def_locked(vsi);
+
+ if (dev)
+ netdev_unlock(dev);
+
+ return ret;
+}
+
/**
* ice_vsi_cfg - configure a previously allocated VSI
* @vsi: pointer to VSI
@@ -2494,10 +2520,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
}
/**
- * ice_vsi_decfg - remove all VSI configuration
+ * ice_vsi_decfg_locked - remove all VSI configuration
* @vsi: pointer to VSI
+ *
+ * Should be called only under the netdev lock.
*/
-void ice_vsi_decfg(struct ice_vsi *vsi)
+static void ice_vsi_decfg_locked(struct ice_vsi *vsi)
{
struct ice_pf *pf = vsi->back;
int err;
@@ -2515,7 +2543,7 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
ice_destroy_xdp_rings(vsi, ICE_XDP_CFG_PART);
ice_vsi_clear_rings(vsi);
- ice_vsi_free_q_vectors(vsi);
+ ice_vsi_free_q_vectors_locked(vsi);
ice_vsi_put_qs(vsi);
ice_vsi_free_arrays(vsi);
@@ -2530,6 +2558,23 @@ void ice_vsi_decfg(struct ice_vsi *vsi)
vsi->agg_node->num_vsis--;
}
+/**
+ * ice_vsi_decfg - remove all VSI configuration
+ * @vsi: pointer to VSI
+ */
+void ice_vsi_decfg(struct ice_vsi *vsi)
+{
+ struct net_device *dev = vsi->netdev;
+
+ if (dev)
+ netdev_lock(dev);
+
+ ice_vsi_decfg_locked(vsi);
+
+ if (dev)
+ netdev_unlock(dev);
+}
+
/**
* ice_vsi_setup - Set up a VSI by a given type
* @pf: board private structure
@@ -2703,7 +2748,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
ice_down(vsi);
- ice_vsi_clear_napi_queues(vsi);
+ ice_vsi_clear_napi_queues_locked(vsi);
ice_vsi_free_irq(vsi);
ice_vsi_free_tx_rings(vsi);
ice_vsi_free_rx_rings(vsi);
@@ -2772,12 +2817,13 @@ void ice_dis_vsi(struct ice_vsi *vsi, bool locked)
}
/**
- * ice_vsi_set_napi_queues - associate netdev queues with napi
+ * ice_vsi_set_napi_queues_locked - associate netdev queues with napi
* @vsi: VSI pointer
*
* Associate queue[s] with napi for all vectors.
+ * Must be called only with the netdev_lock taken.
*/
-void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
+void ice_vsi_set_napi_queues_locked(struct ice_vsi *vsi)
{
struct net_device *netdev = vsi->netdev;
int q_idx, v_idx;
@@ -2785,7 +2831,6 @@ void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
if (!netdev)
return;
- ASSERT_RTNL();
ice_for_each_rxq(vsi, q_idx)
if (vsi->rx_rings[q_idx] && vsi->rx_rings[q_idx]->q_vector)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX,
@@ -2799,17 +2844,37 @@ void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
ice_for_each_q_vector(vsi, v_idx) {
struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
- netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq);
+ netif_napi_set_irq_locked(&q_vector->napi, q_vector->irq.virq);
}
}
/**
- * ice_vsi_clear_napi_queues - dissociate netdev queues from napi
+ * ice_vsi_set_napi_queues - associate VSI queues with NAPIs
* @vsi: VSI pointer
*
+ * Version of ice_vsi_set_napi_queues_locked() that takes the netdev_lock,
+ * to use it outside of the net_device_ops context.
+ */
+void ice_vsi_set_napi_queues(struct ice_vsi *vsi)
+{
+ struct net_device *netdev = vsi->netdev;
+
+ if (!netdev)
+ return;
+
+ netdev_lock(netdev);
+ ice_vsi_set_napi_queues_locked(vsi);
+ netdev_unlock(netdev);
+}
+
+/**
+ * ice_vsi_clear_napi_queues_locked - dissociate netdev queues from napi
+ * @vsi: VSI to process
+ *
* Clear the association between all VSI queues queue[s] and napi.
+ * Must be called only with the netdev_lock taken.
*/
-void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
+void ice_vsi_clear_napi_queues_locked(struct ice_vsi *vsi)
{
struct net_device *netdev = vsi->netdev;
int q_idx, v_idx;
@@ -2817,12 +2882,11 @@ void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
if (!netdev)
return;
- ASSERT_RTNL();
/* Clear the NAPI's interrupt number */
ice_for_each_q_vector(vsi, v_idx) {
struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
- netif_napi_set_irq(&q_vector->napi, -1);
+ netif_napi_set_irq_locked(&q_vector->napi, -1);
}
ice_for_each_txq(vsi, q_idx)
@@ -2832,6 +2896,25 @@ void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL);
}
+/**
+ * ice_vsi_clear_napi_queues - dissociate VSI queues from NAPIs
+ * @vsi: VSI to process
+ *
+ * Version of ice_vsi_clear_napi_queues_locked() that takes the netdev lock,
+ * to use it outside of the net_device_ops context.
+ */
+void ice_vsi_clear_napi_queues(struct ice_vsi *vsi)
+{
+ struct net_device *netdev = vsi->netdev;
+
+ if (!netdev)
+ return;
+
+ netdev_lock(netdev);
+ ice_vsi_clear_napi_queues_locked(vsi);
+ netdev_unlock(netdev);
+}
+
/**
* ice_napi_add - register NAPI handler for the VSI
* @vsi: VSI for which NAPI handler is to be registered
@@ -3069,16 +3152,17 @@ ice_vsi_realloc_stat_arrays(struct ice_vsi *vsi)
}
/**
- * ice_vsi_rebuild - Rebuild VSI after reset
+ * ice_vsi_rebuild_locked - Rebuild VSI after reset
* @vsi: VSI to be rebuild
* @vsi_flags: flags used for VSI rebuild flow
*
* Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
* ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
+ * Should be called only under the netdev lock.
*
* Returns 0 on success and negative value on failure
*/
-int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
+int ice_vsi_rebuild_locked(struct ice_vsi *vsi, u32 vsi_flags)
{
struct ice_coalesce_stored *coalesce;
int prev_num_q_vectors;
@@ -3099,8 +3183,8 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
if (ret)
goto unlock;
- ice_vsi_decfg(vsi);
- ret = ice_vsi_cfg_def(vsi);
+ ice_vsi_decfg_locked(vsi);
+ ret = ice_vsi_cfg_def_locked(vsi);
if (ret)
goto unlock;
@@ -3137,6 +3221,32 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
return ret;
}
+/**
+ * ice_vsi_rebuild - Rebuild VSI after reset
+ * @vsi: VSI to be rebuild
+ * @vsi_flags: flags used for VSI rebuild flow
+ *
+ * Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
+ * ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
+{
+ struct net_device *dev = vsi->netdev;
+ int ret;
+
+ if (dev)
+ netdev_lock(dev);
+
+ ret = ice_vsi_rebuild_locked(vsi, vsi_flags);
+
+ if (dev)
+ netdev_unlock(dev);
+
+ return ret;
+}
+
/**
* ice_is_reset_in_progress - check for a reset in progress
* @state: PF state field
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index 49454d98dcfe..e55b72db72c4 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -53,9 +53,11 @@ struct ice_vsi *
ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
-void ice_napi_add(struct ice_vsi *vsi);
-
+void ice_vsi_set_napi_queues_locked(struct ice_vsi *vsi);
void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
+void ice_vsi_clear_napi_queues_locked(struct ice_vsi *vsi);
+
+void ice_napi_add(struct ice_vsi *vsi);
int ice_vsi_release(struct ice_vsi *vsi);
@@ -66,6 +68,7 @@ int ice_ena_vsi(struct ice_vsi *vsi, bool locked);
void ice_vsi_decfg(struct ice_vsi *vsi);
void ice_dis_vsi(struct ice_vsi *vsi, bool locked);
+int ice_vsi_rebuild_locked(struct ice_vsi *vsi, u32 vsi_flags);
int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags);
int ice_vsi_cfg(struct ice_vsi *vsi);
struct ice_vsi *ice_vsi_alloc(struct ice_pf *pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 4da37caa3ec9..27ee7c1ee19c 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3525,6 +3525,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
}
netdev->netdev_ops = &ice_netdev_ops;
+ netdev->request_ops_lock = true;
netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
netdev->xdp_metadata_ops = &ice_xdp_md_ops;
ice_set_ethtool_ops(netdev);
@@ -4131,6 +4132,7 @@ bool ice_is_wol_supported(struct ice_hw *hw)
* @locked: is adev device_lock held
*
* Only change the number of queues if new_tx, or new_rx is non-0.
+ * Note that it should be called only with the netdev lock taken.
*
* Returns 0 on success.
*/
@@ -4156,7 +4158,7 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
/* set for the next time the netdev is started */
if (!netif_running(vsi->netdev)) {
- err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
+ err = ice_vsi_rebuild_locked(vsi, ICE_VSI_FLAG_NO_INIT);
if (err)
goto rebuild_err;
dev_dbg(ice_pf_to_dev(pf), "Link is down, queue count change happens when link is brought up\n");
@@ -4164,7 +4166,7 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
}
ice_vsi_close(vsi);
- err = ice_vsi_rebuild(vsi, ICE_VSI_FLAG_NO_INIT);
+ err = ice_vsi_rebuild_locked(vsi, ICE_VSI_FLAG_NO_INIT);
if (err)
goto rebuild_err;
@@ -5534,16 +5536,17 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
/* Remap vectors and rings, after successful re-init interrupts */
ice_for_each_vsi(pf, v) {
- if (!pf->vsi[v])
+ struct ice_vsi *vsi = pf->vsi[v];
+
+ if (!vsi)
continue;
- ret = ice_vsi_alloc_q_vectors(pf->vsi[v]);
+ ret = ice_vsi_alloc_q_vectors(vsi);
if (ret)
goto err_reinit;
- ice_vsi_map_rings_to_vectors(pf->vsi[v]);
- rtnl_lock();
- ice_vsi_set_napi_queues(pf->vsi[v]);
- rtnl_unlock();
+
+ ice_vsi_map_rings_to_vectors(vsi);
+ ice_vsi_set_napi_queues(vsi);
}
ret = ice_req_irq_msix_misc(pf);
@@ -5556,13 +5559,15 @@ static int ice_reinit_interrupt_scheme(struct ice_pf *pf)
return 0;
err_reinit:
- while (v--)
- if (pf->vsi[v]) {
- rtnl_lock();
- ice_vsi_clear_napi_queues(pf->vsi[v]);
- rtnl_unlock();
- ice_vsi_free_q_vectors(pf->vsi[v]);
- }
+ while (v--) {
+ struct ice_vsi *vsi = pf->vsi[v];
+
+ if (!vsi)
+ continue;
+
+ ice_vsi_clear_napi_queues(vsi);
+ ice_vsi_free_q_vectors(vsi);
+ }
return ret;
}
@@ -5624,14 +5629,17 @@ static int ice_suspend(struct device *dev)
* to CPU0.
*/
ice_free_irq_msix_misc(pf);
+
ice_for_each_vsi(pf, v) {
- if (!pf->vsi[v])
+ struct ice_vsi *vsi = pf->vsi[v];
+
+ if (!vsi)
continue;
- rtnl_lock();
- ice_vsi_clear_napi_queues(pf->vsi[v]);
- rtnl_unlock();
- ice_vsi_free_q_vectors(pf->vsi[v]);
+
+ ice_vsi_clear_napi_queues(vsi);
+ ice_vsi_free_q_vectors(vsi);
}
+
ice_clear_interrupt_scheme(pf);
pci_save_state(pdev);
@@ -6759,7 +6767,7 @@ static void ice_napi_enable_all(struct ice_vsi *vsi)
ice_init_moderation(q_vector);
if (q_vector->rx.rx_ring || q_vector->tx.tx_ring)
- napi_enable(&q_vector->napi);
+ napi_enable_locked(&q_vector->napi);
}
}
@@ -7258,7 +7266,7 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
if (q_vector->rx.rx_ring || q_vector->tx.tx_ring)
- napi_disable(&q_vector->napi);
+ napi_disable_locked(&q_vector->napi);
cancel_work_sync(&q_vector->tx.dim.work);
cancel_work_sync(&q_vector->rx.dim.work);
@@ -7558,7 +7566,7 @@ int ice_vsi_open(struct ice_vsi *vsi)
if (err)
goto err_set_qs;
- ice_vsi_set_napi_queues(vsi);
+ ice_vsi_set_napi_queues_locked(vsi);
}
err = ice_up_complete(vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 1a2c94375ca7..2c3db1b03055 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -58,6 +58,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
eth_hw_addr_set(netdev, dyn_port->hw_addr);
ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
netdev->netdev_ops = &ice_sf_netdev_ops;
+ netdev->request_ops_lock = true;
SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 953e68ed0f9a..6d08a11a86a8 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -33,9 +33,9 @@ ice_qvec_toggle_napi(struct ice_vsi *vsi, struct ice_q_vector *q_vector,
return;
if (enable)
- napi_enable(&q_vector->napi);
+ napi_enable_locked(&q_vector->napi);
else
- napi_disable(&q_vector->napi);
+ napi_disable_locked(&q_vector->napi);
}
/**
--
2.47.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH net-next 4/9] ice: implement Rx queue management ops
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (2 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 3/9] ice: migrate to netdev ops lock Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 5/9] ice: add support for transmitting unreadable frags Tony Nguyen
` (4 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Alexander Lobakin, anthony.l.nguyen, jacob.e.keller,
nxne.cnse.osdt.itp.upstreaming, horms, maciej.fijalkowski,
magnus.karlsson, ast, daniel, hawk, john.fastabend, sdf, bpf,
Aleksandr Loktionov, Kohei Enju, Alexander Nowlin
From: Alexander Lobakin <aleksander.lobakin@intel.com>
ice is now ready for queue_mgmt_ops support. It already has an API
to disable/reconfig/enable one particular queue (for XSk). Reuse as
much of that code as possible to implement the Rx queue management
callbacks, and vice versa -- ice_queue_mem_{alloc,free}() can be
reused during ifup/ifdown to avoid code duplication.
With this, ice passes the io_uring zcrx selftests, meaning the Rx
part of netmem/MP support is done.
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Kohei Enju <kohei@enjuk.jp>
Signed-off-by: Kohei Enju <kohei@enjuk.jp>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Alexander Nowlin <alexander.nowlin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/ice/ice_base.c | 194 +++++++++++++------
drivers/net/ethernet/intel/ice/ice_ethtool.c | 1 +
drivers/net/ethernet/intel/ice/ice_lib.h | 5 +
drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
drivers/net/ethernet/intel/ice/ice_sf_eth.c | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx.c | 26 ++-
drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +
7 files changed, 166 insertions(+), 66 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index f084b1c48e5d..052a6fae50fb 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -651,6 +651,43 @@ static int ice_rxq_pp_create(struct ice_rx_ring *rq)
return err;
}
+static int ice_queue_mem_alloc(struct net_device *dev,
+ struct netdev_queue_config *qcfg,
+ void *per_queue_mem, int idx)
+{
+ const struct ice_netdev_priv *priv = netdev_priv(dev);
+ const struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+ struct ice_rx_ring *new = per_queue_mem;
+ int ret;
+
+ new->count = real->count;
+ new->netdev = real->netdev;
+ new->q_index = real->q_index;
+ new->q_vector = real->q_vector;
+ new->vsi = real->vsi;
+
+ ret = ice_rxq_pp_create(new);
+ if (ret)
+ return ret;
+
+ if (!netif_running(dev))
+ return 0;
+
+ ret = __xdp_rxq_info_reg(&new->xdp_rxq, new->netdev, new->q_index,
+ new->q_vector->napi.napi_id, new->rx_buf_len);
+ if (ret)
+ goto err_destroy_fq;
+
+ xdp_rxq_info_attach_page_pool(&new->xdp_rxq, new->pp);
+
+ return 0;
+
+err_destroy_fq:
+ ice_rxq_pp_destroy(new);
+
+ return ret;
+}
+
/**
* ice_vsi_cfg_rxq - Configure an Rx queue
* @ring: the ring being configured
@@ -665,23 +702,12 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
int err;
if (ring->vsi->type == ICE_VSI_PF || ring->vsi->type == ICE_VSI_SF) {
- if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
- err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
- ring->q_index,
- ring->q_vector->napi.napi_id,
- ring->rx_buf_len);
- if (err)
- return err;
- }
-
ice_rx_xsk_pool(ring);
err = ice_realloc_rx_xdp_bufs(ring, ring->xsk_pool);
if (err)
return err;
if (ring->xsk_pool) {
- xdp_rxq_info_unreg(&ring->xdp_rxq);
-
rx_buf_len =
xsk_pool_get_rx_frame_size(ring->xsk_pool);
err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
@@ -700,20 +726,10 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
ring->q_index);
} else {
- err = ice_rxq_pp_create(ring);
+ err = ice_queue_mem_alloc(ring->netdev, NULL, ring,
+ ring->q_index);
if (err)
return err;
-
- if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) {
- err = __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
- ring->q_index,
- ring->q_vector->napi.napi_id,
- ring->rx_buf_len);
- if (err)
- goto err_destroy_fq;
- }
- xdp_rxq_info_attach_page_pool(&ring->xdp_rxq,
- ring->pp);
}
}
@@ -722,7 +738,7 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
if (err) {
dev_err(dev, "ice_setup_rx_ctx failed for RxQ %d, err %d\n",
ring->q_index, err);
- goto err_destroy_fq;
+ goto err_clean_rq;
}
if (ring->xsk_pool) {
@@ -753,12 +769,12 @@ static int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
err = ice_alloc_rx_bufs(ring, num_bufs);
if (err)
- goto err_destroy_fq;
+ goto err_clean_rq;
return 0;
-err_destroy_fq:
- ice_rxq_pp_destroy(ring);
+err_clean_rq:
+ ice_clean_rx_ring(ring);
return err;
}
@@ -1470,27 +1486,7 @@ static void ice_qp_reset_stats(struct ice_vsi *vsi, u16 q_idx)
sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats));
}
-/**
- * ice_qp_clean_rings - Cleans all the rings of a given index
- * @vsi: VSI that contains rings of interest
- * @q_idx: ring index in array
- */
-static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
-{
- ice_clean_tx_ring(vsi->tx_rings[q_idx]);
- if (vsi->xdp_rings)
- ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
- ice_clean_rx_ring(vsi->rx_rings[q_idx]);
-}
-
-/**
- * ice_qp_dis - Disables a queue pair
- * @vsi: VSI of interest
- * @q_idx: ring index in array
- *
- * Returns 0 on success, negative on failure.
- */
-int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+static int __ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
{
struct ice_txq_meta txq_meta = { };
struct ice_q_vector *q_vector;
@@ -1529,23 +1525,35 @@ int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
}
ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, false);
- ice_qp_clean_rings(vsi, q_idx);
ice_qp_reset_stats(vsi, q_idx);
+ ice_clean_tx_ring(vsi->tx_rings[q_idx]);
+ if (vsi->xdp_rings)
+ ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
+
return fail;
}
/**
- * ice_qp_ena - Enables a queue pair
+ * ice_qp_dis - Disables a queue pair
* @vsi: VSI of interest
* @q_idx: ring index in array
*
* Returns 0 on success, negative on failure.
*/
-int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
+{
+ int ret;
+
+ ret = __ice_qp_dis(vsi, q_idx);
+ ice_clean_rx_ring(vsi->rx_rings[q_idx]);
+
+ return ret;
+}
+
+static int __ice_qp_ena(struct ice_vsi *vsi, u16 q_idx, int fail)
{
struct ice_q_vector *q_vector;
- int fail = 0;
bool link_up;
int err;
@@ -1563,10 +1571,6 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
ice_tx_xsk_pool(vsi, q_idx);
}
- err = ice_vsi_cfg_single_rxq(vsi, q_idx);
- if (!fail)
- fail = err;
-
q_vector = vsi->rx_rings[q_idx]->q_vector;
ice_qvec_cfg_msix(vsi, q_vector, q_idx);
@@ -1587,3 +1591,81 @@ int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
return fail;
}
+
+/**
+ * ice_qp_ena - Enables a queue pair
+ * @vsi: VSI of interest
+ * @q_idx: ring index in array
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
+{
+ return __ice_qp_ena(vsi, q_idx, ice_vsi_cfg_single_rxq(vsi, q_idx));
+}
+
+static int ice_queue_start(struct net_device *dev,
+ struct netdev_queue_config *qcfg,
+ void *per_queue_mem, int idx)
+{
+ const struct ice_netdev_priv *priv = netdev_priv(dev);
+ struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+ struct ice_rx_ring *new = per_queue_mem;
+ struct napi_struct *napi;
+ int ret;
+
+ real->pp = new->pp;
+ real->rx_fqes = new->rx_fqes;
+ real->hdr_fqes = new->hdr_fqes;
+ real->hdr_pp = new->hdr_pp;
+
+ real->hdr_truesize = new->hdr_truesize;
+ real->truesize = new->truesize;
+ real->rx_hdr_len = new->rx_hdr_len;
+ real->rx_buf_len = new->rx_buf_len;
+
+ memcpy(&real->xdp_rxq, &new->xdp_rxq, sizeof(new->xdp_rxq));
+
+ ret = ice_setup_rx_ctx(real);
+ if (ret)
+ return ret;
+
+ napi = &real->q_vector->napi;
+
+ page_pool_enable_direct_recycling(real->pp, napi);
+ if (real->hdr_pp)
+ page_pool_enable_direct_recycling(real->hdr_pp, napi);
+
+ ret = ice_alloc_rx_bufs(real, ICE_DESC_UNUSED(real));
+
+ return __ice_qp_ena(priv->vsi, idx, ret);
+}
+
+static int ice_queue_stop(struct net_device *dev, void *per_queue_mem,
+ int idx)
+{
+ const struct ice_netdev_priv *priv = netdev_priv(dev);
+ struct ice_rx_ring *real = priv->vsi->rx_rings[idx];
+ int ret;
+
+ ret = __ice_qp_dis(priv->vsi, idx);
+ if (ret)
+ return ret;
+
+ page_pool_disable_direct_recycling(real->pp);
+ if (real->hdr_pp)
+ page_pool_disable_direct_recycling(real->hdr_pp);
+
+ ice_zero_rx_ring(real);
+ memcpy(per_queue_mem, real, sizeof(*real));
+
+ return 0;
+}
+
+const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops = {
+ .ndo_queue_mem_alloc = ice_queue_mem_alloc,
+ .ndo_queue_mem_free = ice_queue_mem_free,
+ .ndo_queue_mem_size = sizeof(struct ice_rx_ring),
+ .ndo_queue_start = ice_queue_start,
+ .ndo_queue_stop = ice_queue_stop,
+};
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index c6bc29cfb8e6..8df07e92187f 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -3338,6 +3338,7 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring,
rx_rings[i].cached_phctime = pf->ptp.cached_phc_time;
rx_rings[i].desc = NULL;
rx_rings[i].xdp_buf = NULL;
+ memset(&rx_rings[i].xdp_rxq, 0, sizeof(rx_rings[i].xdp_rxq));
/* this is to allow wr32 to have something to write to
* during early allocation of Rx buffers
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index e55b72db72c4..bccf173959be 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -4,6 +4,8 @@
#ifndef _ICE_LIB_H_
#define _ICE_LIB_H_
+#include <net/netdev_queues.h>
+
#include "ice.h"
#include "ice_vlan.h"
@@ -133,4 +135,7 @@ void ice_clear_feature_support(struct ice_pf *pf, enum ice_feature f);
void ice_init_feature_support(struct ice_pf *pf);
bool ice_vsi_is_rx_queue_active(struct ice_vsi *vsi);
void ice_vsi_update_l2tsel(struct ice_vsi *vsi, enum ice_l2tsel l2tsel);
+
+extern const struct netdev_queue_mgmt_ops ice_queue_mgmt_ops;
+
#endif /* !_ICE_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 27ee7c1ee19c..d4480698f917 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3525,7 +3525,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
}
netdev->netdev_ops = &ice_netdev_ops;
- netdev->request_ops_lock = true;
+ netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
netdev->xdp_metadata_ops = &ice_xdp_md_ops;
ice_set_ethtool_ops(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 2c3db1b03055..41e1606a8222 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -58,7 +58,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
eth_hw_addr_set(netdev, dyn_port->hw_addr);
ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
netdev->netdev_ops = &ice_sf_netdev_ops;
- netdev->request_ops_lock = true;
+ netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 396326a6d5be..d2cddb24ea05 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -530,17 +530,13 @@ void ice_rxq_pp_destroy(struct ice_rx_ring *rq)
rq->hdr_pp = NULL;
}
-/**
- * ice_clean_rx_ring - Free Rx buffers
- * @rx_ring: ring to be cleaned
- */
-void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
+void ice_queue_mem_free(struct net_device *dev, void *per_queue_mem)
{
- u32 size;
+ struct ice_rx_ring *rx_ring = per_queue_mem;
if (rx_ring->xsk_pool) {
ice_xsk_clean_rx_ring(rx_ring);
- goto rx_skip_free;
+ return;
}
/* ring already cleared, nothing to do */
@@ -567,8 +563,12 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
}
ice_rxq_pp_destroy(rx_ring);
+}
+
+void ice_zero_rx_ring(struct ice_rx_ring *rx_ring)
+{
+ size_t size;
-rx_skip_free:
/* Zero out the descriptor ring */
size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
PAGE_SIZE);
@@ -578,6 +578,16 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
rx_ring->next_to_use = 0;
}
+/**
+ * ice_clean_rx_ring - Free Rx buffers
+ * @rx_ring: ring to be cleaned
+ */
+void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
+{
+ ice_queue_mem_free(rx_ring->netdev, rx_ring);
+ ice_zero_rx_ring(rx_ring);
+}
+
/**
* ice_free_rx_ring - Free Rx resources
* @rx_ring: ring to clean the resources from
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index b6547e1b7c42..557b5e656bb0 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -458,6 +458,8 @@ u16
ice_select_queue(struct net_device *dev, struct sk_buff *skb,
struct net_device *sb_dev);
void ice_clean_tx_ring(struct ice_tx_ring *tx_ring);
+void ice_queue_mem_free(struct net_device *dev, void *per_queue_mem);
+void ice_zero_rx_ring(struct ice_rx_ring *rx_ring);
void ice_clean_rx_ring(struct ice_rx_ring *rx_ring);
int ice_setup_tx_ring(struct ice_tx_ring *tx_ring);
int ice_setup_rx_ring(struct ice_rx_ring *rx_ring);
--
2.47.1
* [PATCH net-next 5/9] ice: add support for transmitting unreadable frags
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (3 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 4/9] ice: implement Rx queue management ops Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 6/9] ice: Make name member of struct ice_cgu_pin_desc const Tony Nguyen
` (3 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Alexander Lobakin, anthony.l.nguyen, jacob.e.keller,
nxne.cnse.osdt.itp.upstreaming, horms, maciej.fijalkowski,
magnus.karlsson, ast, daniel, hawk, john.fastabend, sdf, bpf,
Aleksandr Loktionov, Alexander Nowlin
From: Alexander Lobakin <aleksander.lobakin@intel.com>
Advertise netmem Tx support in ice. The only change needed is to set
ICE_TX_BUF_FRAG conditionally -- only when skb_frag_is_net_iov() is
false. Otherwise, the Tx buffer type stays ICE_TX_BUF_EMPTY and the
driver skips the DMA unmap on completion.
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Alexander Nowlin <alexander.nowlin@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/ice/ice_main.c | 1 +
drivers/net/ethernet/intel/ice/ice_sf_eth.c | 1 +
drivers/net/ethernet/intel/ice/ice_txrx.c | 17 +++++++++++++----
3 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index d4480698f917..7674ae6169cd 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3526,6 +3526,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
netdev->netdev_ops = &ice_netdev_ops;
netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
+ netdev->netmem_tx = true;
netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
netdev->xdp_metadata_ops = &ice_xdp_md_ops;
ice_set_ethtool_ops(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_sf_eth.c b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
index 41e1606a8222..51ad13c9d7f9 100644
--- a/drivers/net/ethernet/intel/ice/ice_sf_eth.c
+++ b/drivers/net/ethernet/intel/ice/ice_sf_eth.c
@@ -59,6 +59,7 @@ static int ice_sf_cfg_netdev(struct ice_dynamic_port *dyn_port,
ether_addr_copy(netdev->perm_addr, dyn_port->hw_addr);
netdev->netdev_ops = &ice_sf_netdev_ops;
netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
+ netdev->netmem_tx = true;
SET_NETDEV_DEVLINK_PORT(netdev, devlink_port);
err = register_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index d2cddb24ea05..54e44801da95 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -113,11 +113,17 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc,
static void
ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
{
- if (tx_buf->type != ICE_TX_BUF_XDP_TX && dma_unmap_len(tx_buf, len))
+ switch (tx_buf->type) {
+ case ICE_TX_BUF_DUMMY:
+ case ICE_TX_BUF_FRAG:
+ case ICE_TX_BUF_SKB:
+ case ICE_TX_BUF_XDP_XMIT:
dma_unmap_page(ring->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
+ break;
+ }
switch (tx_buf->type) {
case ICE_TX_BUF_DUMMY:
@@ -337,12 +343,14 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget)
}
/* unmap any remaining paged data */
- if (dma_unmap_len(tx_buf, len)) {
+ if (tx_buf->type != ICE_TX_BUF_EMPTY) {
dma_unmap_page(tx_ring->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
DMA_TO_DEVICE);
+
dma_unmap_len_set(tx_buf, len, 0);
+ tx_buf->type = ICE_TX_BUF_EMPTY;
}
}
ice_trace(clean_tx_irq_unmap_eop, tx_ring, tx_desc, tx_buf);
@@ -1492,7 +1500,8 @@ ice_tx_map(struct ice_tx_ring *tx_ring, struct ice_tx_buf *first,
DMA_TO_DEVICE);
tx_buf = &tx_ring->tx_buf[i];
- tx_buf->type = ICE_TX_BUF_FRAG;
+ if (!skb_frag_is_net_iov(frag))
+ tx_buf->type = ICE_TX_BUF_FRAG;
}
/* record SW timestamp if HW timestamp is not available */
@@ -2367,7 +2376,7 @@ void ice_clean_ctrl_tx_irq(struct ice_tx_ring *tx_ring)
}
/* unmap the data header */
- if (dma_unmap_len(tx_buf, len))
+ if (tx_buf->type != ICE_TX_BUF_EMPTY)
dma_unmap_single(tx_ring->dev,
dma_unmap_addr(tx_buf, dma),
dma_unmap_len(tx_buf, len),
--
2.47.1
* [PATCH net-next 6/9] ice: Make name member of struct ice_cgu_pin_desc const
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (4 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 5/9] ice: add support for transmitting unreadable frags Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 7/9] i40e: drop useless bitmap_weight() call in i40e_set_rxfh_fields() Tony Nguyen
` (2 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Simon Horman, anthony.l.nguyen, przemyslaw.kitszel,
richardcochran, Paul Menzel, Aleksandr Loktionov
From: Simon Horman <horms@kernel.org>
The name member of struct ice_cgu_pin_desc is never modified.
Make it const.
Found by inspection.
Compile tested only.
Signed-off-by: Simon Horman <horms@kernel.org>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
index 5896b346e579..9bfd3e79c580 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
@@ -258,7 +258,7 @@ enum ice_si_cgu_out_pins {
};
struct ice_cgu_pin_desc {
- char *name;
+ const char *name;
u8 index;
enum dpll_pin_type type;
u32 freq_supp_num;
--
2.47.1
* [PATCH net-next 7/9] i40e: drop useless bitmap_weight() call in i40e_set_rxfh_fields()
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (5 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 6/9] ice: Make name member of struct ice_cgu_pin_desc const Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 8/9] i40e: Add missing header Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 9/9] ixgbe: refactor: use DECLARE_BITMAP for ring state field Tony Nguyen
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Yury Norov (NVIDIA), anthony.l.nguyen, Simon Horman, Rinitha S
From: "Yury Norov (NVIDIA)" <yury.norov@gmail.com>
bitmap_weight() is O(N) and useless here, because the following
for_each_set_bit() exits immediately when flow_pctypes is empty.
Signed-off-by: Yury Norov (NVIDIA) <yury.norov@gmail.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
.../net/ethernet/intel/i40e/i40e_ethtool.c | 21 +++++++------------
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 6a47ea0927e9..9fca52f42ce4 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -3625,6 +3625,7 @@ static int i40e_set_rxfh_fields(struct net_device *netdev,
((u64)i40e_read_rx_ctl(hw, I40E_PFQF_HENA(1)) << 32);
DECLARE_BITMAP(flow_pctypes, FLOW_PCTYPES_SIZE);
u64 i_set, i_setc;
+ u8 flow_id;
bitmap_zero(flow_pctypes, FLOW_PCTYPES_SIZE);
@@ -3708,20 +3709,14 @@ static int i40e_set_rxfh_fields(struct net_device *netdev,
return -EINVAL;
}
- if (bitmap_weight(flow_pctypes, FLOW_PCTYPES_SIZE)) {
- u8 flow_id;
+ for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
+ i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id)) |
+ ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id)) << 32);
+ i_set = i40e_get_rss_hash_bits(&pf->hw, nfc, i_setc);
- for_each_set_bit(flow_id, flow_pctypes, FLOW_PCTYPES_SIZE) {
- i_setc = (u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id)) |
- ((u64)i40e_read_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id)) << 32);
- i_set = i40e_get_rss_hash_bits(&pf->hw, nfc, i_setc);
-
- i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id),
- (u32)i_set);
- i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id),
- (u32)(i_set >> 32));
- hena |= BIT_ULL(flow_id);
- }
+ i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(0, flow_id), (u32)i_set);
+ i40e_write_rx_ctl(hw, I40E_GLQF_HASH_INSET(1, flow_id), (u32)(i_set >> 32));
+ hena |= BIT_ULL(flow_id);
}
i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), (u32)hena);
--
2.47.1
* [PATCH net-next 8/9] i40e: Add missing header
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (6 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 7/9] i40e: drop useless bitmap_weight() call in i40e_set_rxfh_fields() Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 9/9] ixgbe: refactor: use DECLARE_BITMAP for ring state field Tony Nguyen
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Andy Shevchenko, anthony.l.nguyen, Aleksandr Loktionov
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
While cleaning up another header I hit this build error:
drivers/net/ethernet/intel/i40e/i40e_hmc.h:105:22: error: implicit declaration of function 'upper_32_bits' [-Wimplicit-function-declaration]
105 | val1 = (u32)(upper_32_bits(pa)); \
This is due to a missing header include; add it to fix the issue.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/i40e/i40e_hmc.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
index 480e3a883cc7..967711405919 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
@@ -4,6 +4,8 @@
#ifndef _I40E_HMC_H_
#define _I40E_HMC_H_
+#include <linux/wordpart.h>
+
#include "i40e_alloc.h"
#include "i40e_io.h"
#include "i40e_register.h"
--
2.47.1
* [PATCH net-next 9/9] ixgbe: refactor: use DECLARE_BITMAP for ring state field
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
` (7 preceding siblings ...)
2026-02-06 17:49 ` [PATCH net-next 8/9] i40e: Add missing header Tony Nguyen
@ 2026-02-06 17:49 ` Tony Nguyen
8 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2026-02-06 17:49 UTC (permalink / raw)
To: davem, kuba, pabeni, edumazet, andrew+netdev, netdev
Cc: Aleksandr Loktionov, anthony.l.nguyen, Marcin Szycik, Paul Menzel,
Rinitha S
From: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Convert the ring state field from 'unsigned long' to a proper bitmap
using the DECLARE_BITMAP() macro, aligning with the implementation
pattern already used in the i40e driver.
This change:
- Adds __IXGBE_RING_STATE_NBITS as the bitmap size sentinel to enum
ixgbe_ring_state_t (consistent with i40e's __I40E_RING_STATE_NBITS)
- Changes 'unsigned long state' to 'DECLARE_BITMAP(state,
__IXGBE_RING_STATE_NBITS)' in struct ixgbe_ring
- Removes the address-of operator (&) when passing ring->state to bit
manipulation functions, as bitmap arrays naturally decay to pointers
The change maintains functional equivalence while using the
more appropriate kernel bitmap API, consistent with other Intel Ethernet
drivers.
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Marcin Szycik <marcin.szycik@linux.intel.com>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 27 ++++-----
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c | 4 +-
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 56 +++++++++----------
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 2 +-
4 files changed, 45 insertions(+), 44 deletions(-)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index dce4936708eb..59a1cee40b43 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -322,10 +322,11 @@ enum ixgbe_ring_state_t {
__IXGBE_HANG_CHECK_ARMED,
__IXGBE_TX_XDP_RING,
__IXGBE_TX_DISABLED,
+ __IXGBE_RING_STATE_NBITS, /* must be last */
};
#define ring_uses_build_skb(ring) \
- test_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &(ring)->state)
+ test_bit(__IXGBE_RX_BUILD_SKB_ENABLED, (ring)->state)
struct ixgbe_fwd_adapter {
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
@@ -336,23 +337,23 @@ struct ixgbe_fwd_adapter {
};
#define check_for_tx_hang(ring) \
- test_bit(__IXGBE_TX_DETECT_HANG, &(ring)->state)
+ test_bit(__IXGBE_TX_DETECT_HANG, (ring)->state)
#define set_check_for_tx_hang(ring) \
- set_bit(__IXGBE_TX_DETECT_HANG, &(ring)->state)
+ set_bit(__IXGBE_TX_DETECT_HANG, (ring)->state)
#define clear_check_for_tx_hang(ring) \
- clear_bit(__IXGBE_TX_DETECT_HANG, &(ring)->state)
+ clear_bit(__IXGBE_TX_DETECT_HANG, (ring)->state)
#define ring_is_rsc_enabled(ring) \
- test_bit(__IXGBE_RX_RSC_ENABLED, &(ring)->state)
+ test_bit(__IXGBE_RX_RSC_ENABLED, (ring)->state)
#define set_ring_rsc_enabled(ring) \
- set_bit(__IXGBE_RX_RSC_ENABLED, &(ring)->state)
+ set_bit(__IXGBE_RX_RSC_ENABLED, (ring)->state)
#define clear_ring_rsc_enabled(ring) \
- clear_bit(__IXGBE_RX_RSC_ENABLED, &(ring)->state)
+ clear_bit(__IXGBE_RX_RSC_ENABLED, (ring)->state)
#define ring_is_xdp(ring) \
- test_bit(__IXGBE_TX_XDP_RING, &(ring)->state)
+ test_bit(__IXGBE_TX_XDP_RING, (ring)->state)
#define set_ring_xdp(ring) \
- set_bit(__IXGBE_TX_XDP_RING, &(ring)->state)
+ set_bit(__IXGBE_TX_XDP_RING, (ring)->state)
#define clear_ring_xdp(ring) \
- clear_bit(__IXGBE_TX_XDP_RING, &(ring)->state)
+ clear_bit(__IXGBE_TX_XDP_RING, (ring)->state)
struct ixgbe_ring {
struct ixgbe_ring *next; /* pointer to next ring in q_vector */
struct ixgbe_q_vector *q_vector; /* backpointer to host q_vector */
@@ -364,7 +365,7 @@ struct ixgbe_ring {
struct ixgbe_tx_buffer *tx_buffer_info;
struct ixgbe_rx_buffer *rx_buffer_info;
};
- unsigned long state;
+ DECLARE_BITMAP(state, __IXGBE_RING_STATE_NBITS);
u8 __iomem *tail;
dma_addr_t dma; /* phys. address of descriptor ring */
unsigned int size; /* length in bytes */
@@ -453,7 +454,7 @@ struct ixgbe_ring_feature {
*/
static inline unsigned int ixgbe_rx_bufsz(struct ixgbe_ring *ring)
{
- if (test_bit(__IXGBE_RX_3K_BUFFER, &ring->state))
+ if (test_bit(__IXGBE_RX_3K_BUFFER, ring->state))
return IXGBE_RXBUFFER_3K;
#if (PAGE_SIZE < 8192)
if (ring_uses_build_skb(ring))
@@ -465,7 +466,7 @@ static inline unsigned int ixgbe_rx_bufsz(struct ixgbe_ring *ring)
static inline unsigned int ixgbe_rx_pg_order(struct ixgbe_ring *ring)
{
#if (PAGE_SIZE < 8192)
- if (test_bit(__IXGBE_RX_3K_BUFFER, &ring->state))
+ if (test_bit(__IXGBE_RX_3K_BUFFER, ring->state))
return 1;
#endif
return 0;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index a1d04914fbbc..b5c85c567212 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -979,7 +979,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
* can be marked as checksum errors.
*/
if (adapter->hw.mac.type == ixgbe_mac_82599EB)
- set_bit(__IXGBE_RX_CSUM_UDP_ZERO_ERR, &ring->state);
+ set_bit(__IXGBE_RX_CSUM_UDP_ZERO_ERR, ring->state);
#ifdef IXGBE_FCOE
if (adapter->netdev->fcoe_mtu) {
@@ -987,7 +987,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter,
f = &adapter->ring_feature[RING_F_FCOE];
if ((rxr_idx >= f->offset) &&
(rxr_idx < f->offset + f->indices))
- set_bit(__IXGBE_RX_FCOE, &ring->state);
+ set_bit(__IXGBE_RX_FCOE, ring->state);
}
#endif /* IXGBE_FCOE */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index c58051e4350b..d50f8558803a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -968,7 +968,7 @@ static void ixgbe_update_xoff_rx_lfc(struct ixgbe_adapter *adapter)
for (i = 0; i < adapter->num_tx_queues; i++)
clear_bit(__IXGBE_HANG_CHECK_ARMED,
- &adapter->tx_ring[i]->state);
+ adapter->tx_ring[i]->state);
}
static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
@@ -1011,7 +1011,7 @@ static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
tc = tx_ring->dcb_tc;
if (xoff[tc])
- clear_bit(__IXGBE_HANG_CHECK_ARMED, &tx_ring->state);
+ clear_bit(__IXGBE_HANG_CHECK_ARMED, tx_ring->state);
}
for (i = 0; i < adapter->num_xdp_queues; i++) {
@@ -1019,7 +1019,7 @@ static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
tc = xdp_ring->dcb_tc;
if (xoff[tc])
- clear_bit(__IXGBE_HANG_CHECK_ARMED, &xdp_ring->state);
+ clear_bit(__IXGBE_HANG_CHECK_ARMED, xdp_ring->state);
}
}
@@ -1103,11 +1103,11 @@ static bool ixgbe_check_tx_hang(struct ixgbe_ring *tx_ring)
if (tx_done_old == tx_done && tx_pending)
/* make sure it is true for two checks in a row */
return test_and_set_bit(__IXGBE_HANG_CHECK_ARMED,
- &tx_ring->state);
+ tx_ring->state);
/* update completed stats and continue */
tx_ring->tx_stats.tx_done_old = tx_done;
/* reset the countdown */
- clear_bit(__IXGBE_HANG_CHECK_ARMED, &tx_ring->state);
+ clear_bit(__IXGBE_HANG_CHECK_ARMED, tx_ring->state);
return false;
}
@@ -1660,7 +1660,7 @@ static inline bool ixgbe_rx_is_fcoe(struct ixgbe_ring *ring,
{
__le16 pkt_info = rx_desc->wb.lower.lo_dword.hs_rss.pkt_info;
- return test_bit(__IXGBE_RX_FCOE, &ring->state) &&
+ return test_bit(__IXGBE_RX_FCOE, ring->state) &&
((pkt_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_ETQF_MASK)) ==
(cpu_to_le16(IXGBE_ETQF_FILTER_FCOE <<
IXGBE_RXDADV_PKTTYPE_ETQF_SHIFT)));
@@ -1708,7 +1708,7 @@ static inline void ixgbe_rx_checksum(struct ixgbe_ring *ring,
* checksum errors.
*/
if ((pkt_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_UDP)) &&
- test_bit(__IXGBE_RX_CSUM_UDP_ZERO_ERR, &ring->state))
+ test_bit(__IXGBE_RX_CSUM_UDP_ZERO_ERR, ring->state))
return;
ring->rx_stats.csum_err++;
@@ -3526,7 +3526,7 @@ static irqreturn_t ixgbe_msix_other(int irq, void *data)
for (i = 0; i < adapter->num_tx_queues; i++) {
struct ixgbe_ring *ring = adapter->tx_ring[i];
if (test_and_clear_bit(__IXGBE_TX_FDIR_INIT_DONE,
- &ring->state))
+ ring->state))
reinit_count++;
}
if (reinit_count) {
@@ -3952,13 +3952,13 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter,
if (adapter->flags & IXGBE_FLAG_FDIR_HASH_CAPABLE) {
ring->atr_sample_rate = adapter->atr_sample_rate;
ring->atr_count = 0;
- set_bit(__IXGBE_TX_FDIR_INIT_DONE, &ring->state);
+ set_bit(__IXGBE_TX_FDIR_INIT_DONE, ring->state);
} else {
ring->atr_sample_rate = 0;
}
/* initialize XPS */
- if (!test_and_set_bit(__IXGBE_TX_XPS_INIT_DONE, &ring->state)) {
+ if (!test_and_set_bit(__IXGBE_TX_XPS_INIT_DONE, ring->state)) {
struct ixgbe_q_vector *q_vector = ring->q_vector;
if (q_vector)
@@ -3967,7 +3967,7 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter,
ring->queue_index);
}
- clear_bit(__IXGBE_HANG_CHECK_ARMED, &ring->state);
+ clear_bit(__IXGBE_HANG_CHECK_ARMED, ring->state);
/* reinitialize tx_buffer_info */
memset(ring->tx_buffer_info, 0,
@@ -4173,7 +4173,7 @@ static void ixgbe_configure_srrctl(struct ixgbe_adapter *adapter,
srrctl |= PAGE_SIZE >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
else
srrctl |= xsk_buf_len >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
- } else if (test_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state)) {
+ } else if (test_bit(__IXGBE_RX_3K_BUFFER, rx_ring->state)) {
srrctl |= IXGBE_RXBUFFER_3K >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
} else {
srrctl |= IXGBE_RXBUFFER_2K >> IXGBE_SRRCTL_BSIZEPKT_SHIFT;
@@ -4558,7 +4558,7 @@ void ixgbe_configure_rx_ring(struct ixgbe_adapter *adapter,
* higher than the MTU of the PF.
*/
if (ring_uses_build_skb(ring) &&
- !test_bit(__IXGBE_RX_3K_BUFFER, &ring->state))
+ !test_bit(__IXGBE_RX_3K_BUFFER, ring->state))
rxdctl |= IXGBE_MAX_2K_FRAME_BUILD_SKB |
IXGBE_RXDCTL_RLPML_EN;
#endif
@@ -4733,27 +4733,27 @@ static void ixgbe_set_rx_buffer_len(struct ixgbe_adapter *adapter)
rx_ring = adapter->rx_ring[i];
clear_ring_rsc_enabled(rx_ring);
- clear_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
- clear_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &rx_ring->state);
+ clear_bit(__IXGBE_RX_3K_BUFFER, rx_ring->state);
+ clear_bit(__IXGBE_RX_BUILD_SKB_ENABLED, rx_ring->state);
if (adapter->flags2 & IXGBE_FLAG2_RSC_ENABLED)
set_ring_rsc_enabled(rx_ring);
- if (test_bit(__IXGBE_RX_FCOE, &rx_ring->state))
- set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
+ if (test_bit(__IXGBE_RX_FCOE, rx_ring->state))
+ set_bit(__IXGBE_RX_3K_BUFFER, rx_ring->state);
if (adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
continue;
- set_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &rx_ring->state);
+ set_bit(__IXGBE_RX_BUILD_SKB_ENABLED, rx_ring->state);
#if (PAGE_SIZE < 8192)
if (adapter->flags2 & IXGBE_FLAG2_RSC_ENABLED)
- set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
+ set_bit(__IXGBE_RX_3K_BUFFER, rx_ring->state);
if (IXGBE_2K_TOO_SMALL_WITH_PADDING ||
(max_frame > (ETH_FRAME_LEN + ETH_FCS_LEN)))
- set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
+ set_bit(__IXGBE_RX_3K_BUFFER, rx_ring->state);
#endif
}
}
@@ -7946,10 +7946,10 @@ static void ixgbe_fdir_reinit_subtask(struct ixgbe_adapter *adapter)
if (ixgbe_reinit_fdir_tables_82599(hw) == 0) {
for (i = 0; i < adapter->num_tx_queues; i++)
set_bit(__IXGBE_TX_FDIR_INIT_DONE,
- &(adapter->tx_ring[i]->state));
+ adapter->tx_ring[i]->state);
for (i = 0; i < adapter->num_xdp_queues; i++)
set_bit(__IXGBE_TX_FDIR_INIT_DONE,
- &adapter->xdp_ring[i]->state);
+ adapter->xdp_ring[i]->state);
/* re-enable flow director interrupts */
IXGBE_WRITE_REG(hw, IXGBE_EIMS, IXGBE_EIMS_FLOW_DIR);
} else {
@@ -9490,7 +9490,7 @@ netdev_tx_t ixgbe_xmit_frame_ring(struct sk_buff *skb,
ixgbe_tx_csum(tx_ring, first, &ipsec_tx);
/* add the ATR filter if ATR is on */
- if (test_bit(__IXGBE_TX_FDIR_INIT_DONE, &tx_ring->state))
+ if (test_bit(__IXGBE_TX_FDIR_INIT_DONE, tx_ring->state))
ixgbe_atr(tx_ring, first);
#ifdef IXGBE_FCOE
@@ -9530,7 +9530,7 @@ static netdev_tx_t __ixgbe_xmit_frame(struct sk_buff *skb,
return NETDEV_TX_OK;
tx_ring = ring ? ring : adapter->tx_ring[skb_get_queue_mapping(skb)];
- if (unlikely(test_bit(__IXGBE_TX_DISABLED, &tx_ring->state)))
+ if (unlikely(test_bit(__IXGBE_TX_DISABLED, tx_ring->state)))
return NETDEV_TX_BUSY;
return ixgbe_xmit_frame_ring(skb, adapter, tx_ring);
@@ -11015,7 +11015,7 @@ static int ixgbe_xdp_xmit(struct net_device *dev, int n,
if (unlikely(!ring))
return -ENXIO;
- if (unlikely(test_bit(__IXGBE_TX_DISABLED, &ring->state)))
+ if (unlikely(test_bit(__IXGBE_TX_DISABLED, ring->state)))
return -ENXIO;
if (static_branch_unlikely(&ixgbe_xdp_locking_key))
@@ -11121,7 +11121,7 @@ static void ixgbe_disable_txr_hw(struct ixgbe_adapter *adapter,
static void ixgbe_disable_txr(struct ixgbe_adapter *adapter,
struct ixgbe_ring *tx_ring)
{
- set_bit(__IXGBE_TX_DISABLED, &tx_ring->state);
+ set_bit(__IXGBE_TX_DISABLED, tx_ring->state);
ixgbe_disable_txr_hw(adapter, tx_ring);
}
@@ -11275,9 +11275,9 @@ void ixgbe_txrx_ring_enable(struct ixgbe_adapter *adapter, int ring)
ixgbe_configure_tx_ring(adapter, xdp_ring);
ixgbe_configure_rx_ring(adapter, rx_ring);
- clear_bit(__IXGBE_TX_DISABLED, &tx_ring->state);
+ clear_bit(__IXGBE_TX_DISABLED, tx_ring->state);
if (xdp_ring)
- clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state);
+ clear_bit(__IXGBE_TX_DISABLED, xdp_ring->state);
/* Rx/Tx/XDP Tx share the same napi context. */
napi_enable(&rx_ring->q_vector->napi);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 7b941505a9d0..89f96c463f02 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -524,7 +524,7 @@ int ixgbe_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
ring = adapter->xdp_ring[qid];
- if (test_bit(__IXGBE_TX_DISABLED, &ring->state))
+ if (test_bit(__IXGBE_TX_DISABLED, ring->state))
return -ENETDOWN;
if (!ring->xsk_pool)
--
2.47.1
^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [net-next,3/9] ice: migrate to netdev ops lock
2026-02-06 17:49 ` [PATCH net-next 3/9] ice: migrate to netdev ops lock Tony Nguyen
@ 2026-02-11 4:24 ` Jakub Kicinski
2026-02-11 13:51 ` Alexander Lobakin
0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2026-02-11 4:24 UTC (permalink / raw)
To: anthony.l.nguyen
Cc: Jakub Kicinski, netdev, sdf, andrew+netdev, ast, sx.rinitha,
horms, yury.norov, john.fastabend, kohei, przemyslaw.kitszel,
richardcochran, alexander.nowlin, daniel, maciej.fijalkowski,
nxne.cnse.osdt.itp.upstreaming, edumazet, aleksandr.loktionov,
marcin.szycik, hawk, jacob.e.keller, magnus.karlsson, pmenzel,
pabeni, bpf, davem, andriy.shevchenko, aleksander.lobakin
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
ice: migrate to netdev ops lock
This commit migrates the ice driver to use the netdev ops lock by
requesting ops locking and using _locked counterparts of NAPI
configuration functions to avoid deadlocks during suspend/resume, queue
rebuild and resets.
> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
[ ... ]
> @@ -2308,10 +2308,14 @@ static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi)
> }
>
> /**
> - * ice_vsi_cfg_def - configure default VSI based on the type
> + * ice_vsi_cfg_def_locked - configure default VSI based on the type
> * @vsi: pointer to VSI
> + *
> + * Should be called only with the netdev lock taken.
> + *
> + * Return: 0 on success, -errno on failure.
> */
> -static int ice_vsi_cfg_def(struct ice_vsi *vsi)
> +static int ice_vsi_cfg_def_locked(struct ice_vsi *vsi)
[ ... ]
> +unroll_vector_base:
> + /* reclaim SW interrupts back to the common pool */
> +unroll_alloc_q_vector:
> + ice_vsi_free_q_vectors(vsi);
Should this be ice_vsi_free_q_vectors_locked(vsi)?
ice_vsi_cfg_def_locked() is called with the netdev lock held (either by
ice_vsi_cfg_def() wrapper or directly from callers like
ice_vsi_rebuild_locked()). The error path here calls ice_vsi_free_q_vectors()
which tries to acquire the netdev lock:
ice_vsi_cfg_def_locked() [netdev lock held]
-> ice_vsi_free_q_vectors()
-> netdev_lock(dev) /* deadlock - already held */
This would deadlock when an error triggers unroll_vector_base or
unroll_alloc_q_vector paths (e.g., memory allocation failure in
ice_vsi_alloc_rings() or ice_vsi_alloc_ring_stats()).
[ ... ]
> @@ -2703,7 +2748,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
> {
> if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
> ice_down(vsi);
>
> - ice_vsi_clear_napi_queues(vsi);
> + ice_vsi_clear_napi_queues_locked(vsi);
ice_vsi_clear_napi_queues_locked() expects the netdev lock to be held.
Are all callers of ice_vsi_close() holding the netdev lock?
Looking at ice_sf_dev_remove():
ice_sf_dev_remove()
-> ice_vsi_close(vsi) /* no netdev_lock held */
-> ice_sf_decfg_netdev() /* unregisters netdev later */
And ice_dis_vsi() when !netif_running(vsi->netdev):
ice_dis_vsi(vsi, false)
-> ice_vsi_close(vsi) /* rtnl_lock only taken when netif_running */
Both paths call ice_vsi_close() without holding the netdev lock while
the netdev is still registered, which could trigger
netdev_assert_locked_or_invisible warnings.
[ ... ]
> @@ -3069,16 +3152,17 @@
> }
>
> /**
> - * ice_vsi_rebuild - Rebuild VSI after reset
> + * ice_vsi_rebuild_locked - Rebuild VSI after reset
> * @vsi: VSI to be rebuild
> * @vsi_flags: flags used for VSI rebuild flow
> *
> * Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
> * ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
> + * Should be called only under the netdev lock.
> *
> * Returns 0 on success and negative value on failure
> */
> -int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
> +int ice_vsi_rebuild_locked(struct ice_vsi *vsi, u32 vsi_flags)
[ ... ]
> +free_coalesce:
> + kfree(coalesce);
> +decfg:
> + if (ret)
> + ice_vsi_decfg(vsi);
Should this be ice_vsi_decfg_locked(vsi)?
ice_vsi_rebuild_locked() is called with the netdev lock already held
(either by the ice_vsi_rebuild() wrapper or by callers like
ice_vsi_recfg_qs()). The error path at label 'decfg:' calls ice_vsi_decfg()
which tries to acquire the lock again:
ice_vsi_rebuild_locked() [netdev lock held]
-> ice_vsi_decfg()
-> netdev_lock(dev) /* deadlock - already held */
This would deadlock when an error occurs after ice_vsi_cfg_def_locked()
succeeds but a later operation fails.
--
pw-bot: cr

* Re: [net-next,3/9] ice: migrate to netdev ops lock
2026-02-11 4:24 ` [net-next,3/9] " Jakub Kicinski
@ 2026-02-11 13:51 ` Alexander Lobakin
2026-02-11 16:55 ` Jakub Kicinski
0 siblings, 1 reply; 15+ messages in thread
From: Alexander Lobakin @ 2026-02-11 13:51 UTC (permalink / raw)
To: Jakub Kicinski
Cc: anthony.l.nguyen, netdev, sdf, andrew+netdev, ast, sx.rinitha,
horms, yury.norov, john.fastabend, kohei, przemyslaw.kitszel,
richardcochran, alexander.nowlin, daniel, maciej.fijalkowski,
nxne.cnse.osdt.itp.upstreaming, edumazet, aleksandr.loktionov,
marcin.szycik, hawk, jacob.e.keller, magnus.karlsson, pmenzel,
pabeni, bpf, davem, andriy.shevchenko
From: Jakub Kicinski <kuba@kernel.org>
Date: Tue, 10 Feb 2026 20:24:58 -0800
Hi,
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
>
> For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
> ---
> ice: migrate to netdev ops lock
>
> This commit migrates the ice driver to use the netdev ops lock by
> requesting ops locking and using _locked counterparts of NAPI
> configuration functions to avoid deadlocks during suspend/resume, queue
> rebuild and resets.
>
>> diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
>> --- a/drivers/net/ethernet/intel/ice/ice_lib.c
>> +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
>
> [ ... ]
>
>> @@ -2308,10 +2308,14 @@ static int ice_vsi_cfg_tc_lan(struct ice_pf *pf, struct ice_vsi *vsi)
>> }
>>
>> /**
>> - * ice_vsi_cfg_def - configure default VSI based on the type
>> + * ice_vsi_cfg_def_locked - configure default VSI based on the type
>> * @vsi: pointer to VSI
>> + *
>> + * Should be called only with the netdev lock taken.
>> + *
>> + * Return: 0 on success, -errno on failure.
>> */
>> -static int ice_vsi_cfg_def(struct ice_vsi *vsi)
>> +static int ice_vsi_cfg_def_locked(struct ice_vsi *vsi)
>
> [ ... ]
>
>> +unroll_vector_base:
>> + /* reclaim SW interrupts back to the common pool */
>> +unroll_alloc_q_vector:
>> + ice_vsi_free_q_vectors(vsi);
>
> Should this be ice_vsi_free_q_vectors_locked(vsi)?
>
> ice_vsi_cfg_def_locked() is called with the netdev lock held (either by
> ice_vsi_cfg_def() wrapper or directly from callers like
> ice_vsi_rebuild_locked()). The error path here calls ice_vsi_free_q_vectors()
> which tries to acquire the netdev lock:
>
> ice_vsi_cfg_def_locked() [netdev lock held]
> -> ice_vsi_free_q_vectors()
> -> netdev_lock(dev) /* deadlock - already held */
>
> This would deadlock when an error triggers unroll_vector_base or
> unroll_alloc_q_vector paths (e.g., memory allocation failure in
> ice_vsi_alloc_rings() or ice_vsi_alloc_ring_stats()).
>
> [ ... ]
>
>> @@ -2703,7 +2748,7 @@ void ice_vsi_close(struct ice_vsi *vsi)
>> {
>> if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state))
>> ice_down(vsi);
>>
>> - ice_vsi_clear_napi_queues(vsi);
>> + ice_vsi_clear_napi_queues_locked(vsi);
>
> ice_vsi_clear_napi_queues_locked() expects the netdev lock to be held.
> Are all callers of ice_vsi_close() holding the netdev lock?
>
> Looking at ice_sf_dev_remove():
>
> ice_sf_dev_remove()
> -> ice_vsi_close(vsi) /* no netdev_lock held */
> -> ice_sf_decfg_netdev() /* unregisters netdev later */
>
> And ice_dis_vsi() when !netif_running(vsi->netdev):
>
> ice_dis_vsi(vsi, false)
> -> ice_vsi_close(vsi) /* rtnl_lock only taken when netif_running */
>
> Both paths call ice_vsi_close() without holding the netdev lock while
> the netdev is still registered, which could trigger
> netdev_assert_locked_or_invisible warnings.
>
> [ ... ]
>
>> @@ -3069,16 +3152,17 @@
>> }
>>
>> /**
>> - * ice_vsi_rebuild - Rebuild VSI after reset
>> + * ice_vsi_rebuild_locked - Rebuild VSI after reset
>> * @vsi: VSI to be rebuild
>> * @vsi_flags: flags used for VSI rebuild flow
>> *
>> * Set vsi_flags to ICE_VSI_FLAG_INIT to initialize a new VSI, or
>> * ICE_VSI_FLAG_NO_INIT to rebuild an existing VSI in hardware.
>> + * Should be called only under the netdev lock.
>> *
>> * Returns 0 on success and negative value on failure
>> */
>> -int ice_vsi_rebuild(struct ice_vsi *vsi, u32 vsi_flags)
>> +int ice_vsi_rebuild_locked(struct ice_vsi *vsi, u32 vsi_flags)
>
> [ ... ]
>
>> +free_coalesce:
>> + kfree(coalesce);
>> +decfg:
>> + if (ret)
>> + ice_vsi_decfg(vsi);
>
> Should this be ice_vsi_decfg_locked(vsi)?
>
> ice_vsi_rebuild_locked() is called with the netdev lock already held
> (either by the ice_vsi_rebuild() wrapper or by callers like
> ice_vsi_recfg_qs()). The error path at label 'decfg:' calls ice_vsi_decfg()
> which tries to acquire the lock again:
>
> ice_vsi_rebuild_locked() [netdev lock held]
> -> ice_vsi_decfg()
> -> netdev_lock(dev) /* deadlock - already held */
>
> This would deadlock when an error occurs after ice_vsi_cfg_def_locked()
> succeeds but a later operation fails.
Tony also fed the series to AI, two times, and each time he got a
different answer.
The series was on iwl-next for 1.5 months and only one bug was reported,
which I fixed immediately.
I can take a look into this, but wouldn't it be better if we take the
series now and then have 2 months to fix bugs, if any appear?
Thanks,
Olek
* Re: [net-next,3/9] ice: migrate to netdev ops lock
2026-02-11 13:51 ` Alexander Lobakin
@ 2026-02-11 16:55 ` Jakub Kicinski
2026-02-11 17:13 ` Alexander Lobakin
0 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2026-02-11 16:55 UTC (permalink / raw)
To: Alexander Lobakin
Cc: anthony.l.nguyen, netdev, sdf, andrew+netdev, ast, sx.rinitha,
horms, yury.norov, john.fastabend, kohei, przemyslaw.kitszel,
richardcochran, alexander.nowlin, daniel, maciej.fijalkowski,
nxne.cnse.osdt.itp.upstreaming, edumazet, aleksandr.loktionov,
marcin.szycik, hawk, jacob.e.keller, magnus.karlsson, pmenzel,
pabeni, bpf, davem, andriy.shevchenko
On Wed, 11 Feb 2026 14:51:56 +0100 Alexander Lobakin wrote:
> >> +free_coalesce:
> >> + kfree(coalesce);
> >> +decfg:
> >> + if (ret)
> >> + ice_vsi_decfg(vsi);
> >
> > Should this be ice_vsi_decfg_locked(vsi)?
> >
> > ice_vsi_rebuild_locked() is called with the netdev lock already held
> > (either by the ice_vsi_rebuild() wrapper or by callers like
> > ice_vsi_recfg_qs()). The error path at label 'decfg:' calls ice_vsi_decfg()
> > which tries to acquire the lock again:
> >
> > ice_vsi_rebuild_locked() [netdev lock held]
> > -> ice_vsi_decfg()
> > -> netdev_lock(dev) /* deadlock - already held */
> >
> > This would deadlock when an error occurs after ice_vsi_cfg_def_locked()
> > succeeds but a later operation fails.
>
> Tony also fed the series to AI, two times, and each time he got a
> different answer.
> The series was on iwl-next for 1.5 months and only one bug was reported,
> which I fixed immediately.
>
> I can take a look into this, but wouldn't it be better if we take the
> series now and then have 2 months to fix bugs, if any appear?
I understand the frustration, but unless the review is a false positive
I don't see how we can ignore it.
Looking at the PR rate from Intel I suspect you may be better off
pointing your frustration at the internal process? There seems to
be a net-next PR every 2 weeks in 2026. It's not like the 1.5 mo
was spent waiting on upstream?
* Re: [net-next,3/9] ice: migrate to netdev ops lock
2026-02-11 16:55 ` Jakub Kicinski
@ 2026-02-11 17:13 ` Alexander Lobakin
2026-02-11 18:46 ` Jacob Keller
0 siblings, 1 reply; 15+ messages in thread
From: Alexander Lobakin @ 2026-02-11 17:13 UTC (permalink / raw)
To: Jakub Kicinski
Cc: anthony.l.nguyen, netdev, sdf, andrew+netdev, ast, sx.rinitha,
horms, yury.norov, john.fastabend, kohei, przemyslaw.kitszel,
richardcochran, alexander.nowlin, daniel, maciej.fijalkowski,
nxne.cnse.osdt.itp.upstreaming, edumazet, aleksandr.loktionov,
marcin.szycik, hawk, jacob.e.keller, magnus.karlsson, pmenzel,
pabeni, bpf, davem, andriy.shevchenko
From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 11 Feb 2026 08:55:14 -0800
> On Wed, 11 Feb 2026 14:51:56 +0100 Alexander Lobakin wrote:
>>>> +free_coalesce:
>>>> + kfree(coalesce);
>>>> +decfg:
>>>> + if (ret)
>>>> + ice_vsi_decfg(vsi);
>>>
>>> Should this be ice_vsi_decfg_locked(vsi)?
>>>
>>> ice_vsi_rebuild_locked() is called with the netdev lock already held
>>> (either by the ice_vsi_rebuild() wrapper or by callers like
>>> ice_vsi_recfg_qs()). The error path at label 'decfg:' calls ice_vsi_decfg()
>>> which tries to acquire the lock again:
>>>
>>> ice_vsi_rebuild_locked() [netdev lock held]
>>> -> ice_vsi_decfg()
>>> -> netdev_lock(dev) /* deadlock - already held */
>>>
>>> This would deadlock when an error occurs after ice_vsi_cfg_def_locked()
>>> succeeds but a later operation fails.
>>
>> Tony also fed the series to AI, two times, and each time he got a
>> different answer.
>> The series was on iwl-next for 1.5 months and only one bug was reported,
>> which I fixed immediately.
>>
>> I can take a look into this, but wouldn't it be better if we take the
>> series now and then have 2 months to fix bugs, if any appear?
>
> I understand the frustration, but unless the review is a false positive
> I don't see how we can ignore it.
>
> Looking at the PR rate from Intel I suspect you may be better off
> pointing your frustration at the internal process? There seems to
> be a net-next PR every 2 weeks in 2026. It's not like the 1.5 mo
> was spent waiting on upstream?
I'm not frustrated at all :> I mentioned those 1.5 months only to say
that during that time, none of the users of our tree hit any bugs,
except one case, which was fixed long ago.
Anyway, this last PR was just an attempt to squeeze in at the end of the
window, to compensate for validation taking a bit too long to test the
series. It's not a problem at all if it doesn't make it.
Maybe I'll even add support for zcrx large buffers in the meantime
instead of sending a follow-up later.
I can't say it's a false positive, but I can't confirm the AI is
absolutely correct here either. At least a couple of the places it said
"shouldn't work" do in fact work. It's not just me being Sarah Connor
when it comes to AI; I just don't want people to trust it too much, or
authors to waste time proving that the AI is wrong.
Your call whether to drop it or take it (this PR also contains several
patches not related to netmem; I think those shouldn't be dropped?).
Thanks,
Olek
* Re: [net-next,3/9] ice: migrate to netdev ops lock
2026-02-11 17:13 ` Alexander Lobakin
@ 2026-02-11 18:46 ` Jacob Keller
0 siblings, 0 replies; 15+ messages in thread
From: Jacob Keller @ 2026-02-11 18:46 UTC (permalink / raw)
To: Alexander Lobakin, Jakub Kicinski
Cc: anthony.l.nguyen, netdev, sdf, andrew+netdev, ast, sx.rinitha,
horms, yury.norov, john.fastabend, kohei, przemyslaw.kitszel,
richardcochran, alexander.nowlin, daniel, maciej.fijalkowski,
nxne.cnse.osdt.itp.upstreaming, edumazet, aleksandr.loktionov,
marcin.szycik, hawk, magnus.karlsson, pmenzel, pabeni, bpf, davem,
andriy.shevchenko
On 2/11/2026 9:13 AM, Alexander Lobakin wrote:
> I'm not frustrated at all :> I mentioned those 1.5 months only to say
> that during that time, none of the users of our tree faced any bugs,
> except one case which was fixed a long ago.
>
I wouldn't trust an absence of reports in the case of subtle locking
issues like this. At least one of the reports here is obviously valid:
ice_vsi_cfg_def_locked holds the netdev lock but calls
ice_vsi_free_q_vectors in its goto cleanup logic.
It's a mistake in the cleanup flow which is likely untested. Of course
users haven't reproduced this, because they haven't managed to hit a
failure that would trigger the cleanup.
> I can't say it's a false positive, but I can't confirm the AI is
> absolutely correct here. At least a couple places mentioned that
> "shouldn't work" in fact works. It's not just me being Sarah Connor when
> it comes to AI, I just don't want people to trust it too much or authors
> to waste time proving that AI is wrong.
>
I agree it is important to take the AI report with a grain of salt, and
it is incredibly frustrating when you see a bogus report that
hallucinated some data.
Unfortunately, in this case it's a real (if extremely unlikely to
trigger in practice) issue.
> Your call whether to drop it or take (this PR also contains several
> patches not related to netmem, I think they shouldn't be dropped?).
>
> Thanks,
> Olek
Thread overview: 15+ messages
2026-02-06 17:48 [PATCH net-next 0/9][pull request] Intel Wired LAN Driver Updates 2026-02-06 (libeth, ice, i40e, ixgbe) Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 1/9] libeth: pass Rx queue index to PP when creating a fill queue Tony Nguyen
2026-02-06 17:48 ` [PATCH net-next 2/9] libeth: handle creating pools with unreadable buffers Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 3/9] ice: migrate to netdev ops lock Tony Nguyen
2026-02-11 4:24 ` [net-next,3/9] " Jakub Kicinski
2026-02-11 13:51 ` Alexander Lobakin
2026-02-11 16:55 ` Jakub Kicinski
2026-02-11 17:13 ` Alexander Lobakin
2026-02-11 18:46 ` Jacob Keller
2026-02-06 17:49 ` [PATCH net-next 4/9] ice: implement Rx queue management ops Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 5/9] ice: add support for transmitting unreadable frags Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 6/9] ice: Make name member of struct ice_cgu_pin_desc const Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 7/9] i40e: drop useless bitmap_weight() call in i40e_set_rxfh_fields() Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 8/9] i40e: Add missing header Tony Nguyen
2026-02-06 17:49 ` [PATCH net-next 9/9] ixgbe: refactor: use DECLARE_BITMAP for ring state field Tony Nguyen