* [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool
@ 2025-07-25 18:42 Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
` (5 more replies)
0 siblings, 6 replies; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay
This series fixes a stability issue in the flow scheduling Tx send/clean
path that results in a Tx timeout.
The existing guardrails in the Tx path were not sufficient to prevent
the driver from reusing completion tags that were still in flight (held
by the HW). This collision would cause the driver to erroneously clean
the wrong packet, leaving the descriptor ring in a bad state.
The main point of this fix is to replace the flow scheduling buffer ring
with a large pool/array of buffers. The completion tag is then simply
the index into this array. The driver tracks the free tags and pulls
the next free one from a refillq. The cleaning routines simply use the
completion tag from the completion descriptor to index into the array to
quickly find the buffers to clean.
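To illustrate the core idea (a minimal sketch with hypothetical names, not
the driver code): the completion tag doubles as the index into the buffer
pool, so cleaning becomes a direct array lookup instead of a ring walk or
hash-table search.

#include <stdint.h>

struct tx_buf {
	void *skb;		/* stand-in for the real per-buffer bookkeeping */
};

struct txq_pool {
	struct tx_buf *bufs;	/* U16_MAX entries in flow scheduling mode */
};

/* clean path: the tag from the completion descriptor is the pool index */
static struct tx_buf *tag_to_buf(struct txq_pool *pool, uint16_t compl_tag)
{
	return &pool->bufs[compl_tag];
}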
All of the code to support the refactor is added first to ensure traffic
still passes with each patch. The final patch then removes all of the
obsolete stashing code.
---
v3:
- Remove unreachable code in patch 4
- Update comment format
v2:
https://lore.kernel.org/intel-wired-lan/20250718002150.2724409-1-joshua.a.hay@intel.com/T/#t
v1:
https://lore.kernel.org/intel-wired-lan/c6444d15-bc20-41a8-9230-9bb266cb2ac6@molgen.mpg.de/T/#maf9f464c598951ee860e5dd24ef8a451a488c5a0
Joshua Hay (6):
idpf: add support for Tx refillqs in flow scheduling mode
idpf: improve when to set RE bit logic
idpf: simplify and fix splitq Tx packet rollback error path
idpf: replace flow scheduling buffer ring with buffer pool
idpf: stop Tx if there are insufficient buffer resources
idpf: remove obsolete stashing code
.../ethernet/intel/idpf/idpf_singleq_txrx.c | 61 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 722 +++++++-----------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 87 +--
3 files changed, 355 insertions(+), 515 deletions(-)
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-07-28 17:06 ` Paul Menzel
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic Joshua Hay
` (4 subsequent siblings)
5 siblings, 2 replies; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay, Madhu Chittim
In certain production environments, it is possible for completion tags
to collide, meaning N packets with the same completion tag are in flight
at the same time. In this environment, any given Tx queue is effectively
used to send both slower traffic and higher throughput traffic
simultaneously. This is the result of a customer's specific
configuration in the device pipeline, the details of which Intel cannot
provide. This configuration results in a small number of out-of-order
completions, i.e., a small number of packets in flight. The existing
guardrails in the driver only protect against a large number of packets
in flight. The slower flow's completions are delayed, which causes the
out-of-order completions. The fast flow will continue sending traffic
and generating tags. Because tags are generated on the fly, the fast
flow eventually uses the same tag for a packet that is still in flight
from the slower flow. The driver has no idea which packet it should
clean when it processes the completion with that tag, but it will look
for the packet on the buffer ring before the hash table. If the slower
flow packet completion is processed first, it will end up cleaning the
fast flow packet on the ring prematurely. This leaves the descriptor
ring in a bad state, resulting in crashes or Tx timeouts.
In summary, generating a tag when a packet is sent can lead to the same
tag being associated with multiple packets. This can lead to resource
leaks, crashes, and/or Tx timeouts.
Before we can replace the tag generation, we need a new mechanism for
the send path to know what tag to use next. The driver will allocate and
initialize a refillq for each TxQ with all of the possible free tag
values. During send, the driver grabs the next free tag from the
refillq's next_to_clean. While cleaning the packet, the clean routine posts
the tag back to the refillq's next_to_use to indicate that it is now
free to use.
This mechanism works exactly the same way as the existing Rx refill
queues, which post the cleaned buffer IDs back to the buffer queue to be
reposted to HW. Since we're using the refillqs for both Rx and Tx now,
genericize some of the existing refillq support.
Note: the refillqs will not be used yet. This is only demonstrating how
they will be used to pass free tags back to the send path.
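For reference, the gen bit works the same way as on the Rx refill queues:
each ring entry carries the producer's generation bit, and the consumer only
treats an entry as valid while that bit matches its own RFL_GEN_CHK state.
A minimal, standalone model of that handshake (illustrative names only; the
real encoding uses IDPF_RFL_BI_BUFID_M/IDPF_RFL_BI_GEN_M as in the patch
below):

#include <stdbool.h>
#include <stdint.h>

#define RFL_GEN_BIT	(1u << 16)	/* mirrors IDPF_RFL_BI_GEN_M */
#define RFL_BUFID_MASK	0xffffu		/* mirrors IDPF_RFL_BI_BUFID_M */

struct sw_refillq {
	uint32_t *ring;
	uint16_t ntu, ntc, count;
	bool use_gen;			/* producer generation (GEN_CHK) */
	bool clean_gen;			/* consumer generation (RFL_GEN_CHK) */
};

/*
 * At init the ring is pre-filled with every buf_id at the consumer's
 * generation and use_gen starts one lap behind (flipped), matching the
 * GEN_CHK flip in the patch below. The producer only ever posts tags the
 * consumer handed out, so it can never overrun the consumer.
 */

/* producer (clean path): post a freed buffer id, flip gen on wrap */
static void refillq_post(struct sw_refillq *q, uint16_t bufid)
{
	q->ring[q->ntu] = bufid | (q->use_gen ? RFL_GEN_BIT : 0);
	if (++q->ntu == q->count) {
		q->ntu = 0;
		q->use_gen = !q->use_gen;
	}
}

/* consumer (send path): an entry is free only if its gen bit matches ours */
static bool refillq_get(struct sw_refillq *q, uint16_t *bufid)
{
	uint32_t entry = q->ring[q->ntc];

	if (!!(entry & RFL_GEN_BIT) != q->clean_gen)
		return false;		/* no free tags: producer has not caught up */
	*bufid = entry & RFL_BUFID_MASK;
	if (++q->ntc == q->count) {
		q->ntc = 0;
		q->clean_gen = !q->clean_gen;
	}
	return true;
}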
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
v2:
- reorder refillq init logic to reduce indentation
- don't drop skb if get free bufid fails, increment busy counter
- add missing unlikely
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 93 +++++++++++++++++++--
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 8 +-
2 files changed, 91 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 5cf440e09d0a..d5908126163d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -139,6 +139,9 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue *txq)
if (!txq->desc_ring)
return;
+ if (txq->refillq)
+ kfree(txq->refillq->ring);
+
dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
txq->desc_ring = NULL;
txq->next_to_use = 0;
@@ -244,6 +247,7 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
struct idpf_tx_queue *tx_q)
{
struct device *dev = tx_q->dev;
+ struct idpf_sw_queue *refillq;
int err;
err = idpf_tx_buf_alloc_all(tx_q);
@@ -267,6 +271,29 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
tx_q->next_to_clean = 0;
idpf_queue_set(GEN_CHK, tx_q);
+ if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
+ return 0;
+
+ refillq = tx_q->refillq;
+ refillq->desc_count = tx_q->desc_count;
+ refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
+ GFP_KERNEL);
+ if (!refillq->ring) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+
+ for (u32 i = 0; i < refillq->desc_count; i++)
+ refillq->ring[i] =
+ FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) |
+ FIELD_PREP(IDPF_RFL_BI_GEN_M,
+ idpf_queue_has(GEN_CHK, refillq));
+
+ /* Go ahead and flip the GEN bit since this counts as filling
+ * up the ring, i.e. we already ring wrapped.
+ */
+ idpf_queue_change(GEN_CHK, refillq);
+
return 0;
err_alloc:
@@ -603,18 +630,18 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
}
/**
- * idpf_rx_post_buf_refill - Post buffer id to refill queue
+ * idpf_post_buf_refill - Post buffer id to refill queue
* @refillq: refill queue to post to
* @buf_id: buffer id to post
*/
-static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
+static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
{
u32 nta = refillq->next_to_use;
/* store the buffer ID and the SW maintained GEN bit to the refillq */
refillq->ring[nta] =
- FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
- FIELD_PREP(IDPF_RX_BI_GEN_M,
+ FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) |
+ FIELD_PREP(IDPF_RFL_BI_GEN_M,
idpf_queue_has(GEN_CHK, refillq));
if (unlikely(++nta == refillq->desc_count)) {
@@ -995,6 +1022,11 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
for (j = 0; j < txq_grp->num_txq; j++) {
+ if (flow_sch_en) {
+ kfree(txq_grp->txqs[j]->refillq);
+ txq_grp->txqs[j]->refillq = NULL;
+ }
+
kfree(txq_grp->txqs[j]);
txq_grp->txqs[j] = NULL;
}
@@ -1414,6 +1446,13 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
}
idpf_queue_set(FLOW_SCH_EN, q);
+
+ q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL);
+ if (!q->refillq)
+ goto err_alloc;
+
+ idpf_queue_set(GEN_CHK, q->refillq);
+ idpf_queue_set(RFL_GEN_CHK, q->refillq);
}
if (!split)
@@ -2005,6 +2044,8 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
+ idpf_post_buf_refill(txq->refillq, compl_tag);
+
/* If we didn't clean anything on the ring, this packet must be
* in the hash table. Go clean it there.
*/
@@ -2364,6 +2405,37 @@ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
return ntu;
}
+/**
+ * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue
+ * @refillq: refill queue to get buffer ID from
+ * @buf_id: return buffer ID
+ *
+ * Return: true if a buffer ID was found, false if not
+ */
+static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
+ u16 *buf_id)
+{
+ u16 ntc = refillq->next_to_clean;
+ u32 refill_desc;
+
+ refill_desc = refillq->ring[ntc];
+
+ if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) !=
+ !!(refill_desc & IDPF_RFL_BI_GEN_M)))
+ return false;
+
+ *buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
+
+ if (unlikely(++ntc == refillq->desc_count)) {
+ idpf_queue_change(RFL_GEN_CHK, refillq);
+ ntc = 0;
+ }
+
+ refillq->next_to_clean = ntc;
+
+ return true;
+}
+
/**
* idpf_tx_splitq_map - Build the Tx flex descriptor
* @tx_q: queue to send buffer on
@@ -2912,6 +2984,13 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
}
if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
+ if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
+ &tx_params.compl_tag))) {
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.q_busy);
+ u64_stats_update_end(&tx_q->stats_sync);
+ }
+
tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
/* Set the RE bit to catch any packets that may have not been
@@ -3464,7 +3543,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
skip_data:
rx_buf->page = NULL;
- idpf_rx_post_buf_refill(refillq, buf_id);
+ idpf_post_buf_refill(refillq, buf_id);
IDPF_RX_BUMP_NTC(rxq, ntc);
/* skip if it is non EOP desc */
@@ -3572,10 +3651,10 @@ static void idpf_rx_clean_refillq(struct idpf_buf_queue *bufq,
bool failure;
if (idpf_queue_has(RFL_GEN_CHK, refillq) !=
- !!(refill_desc & IDPF_RX_BI_GEN_M))
+ !!(refill_desc & IDPF_RFL_BI_GEN_M))
break;
- buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
+ buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc);
if (failure)
break;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 36a0f828a6f8..6924bee6ff5b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -107,8 +107,8 @@ do { \
*/
#define IDPF_TX_SPLITQ_RE_MIN_GAP 64
-#define IDPF_RX_BI_GEN_M BIT(16)
-#define IDPF_RX_BI_BUFID_M GENMASK(15, 0)
+#define IDPF_RFL_BI_GEN_M BIT(16)
+#define IDPF_RFL_BI_BUFID_M GENMASK(15, 0)
#define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
#define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
@@ -621,6 +621,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* @cleaned_pkts: Number of packets cleaned for the above said case
* @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @stash: Tx buffer stash for Flow-based scheduling mode
+ * @refillq: Pointer to refill queue
* @compl_tag_bufid_m: Completion tag buffer id mask
* @compl_tag_cur_gen: Used to keep track of current completion tag generation
* @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
@@ -670,6 +671,7 @@ struct idpf_tx_queue {
u16 tx_max_bufs;
struct idpf_txq_stash *stash;
+ struct idpf_sw_queue *refillq;
u16 compl_tag_bufid_m;
u16 compl_tag_cur_gen;
@@ -691,7 +693,7 @@ struct idpf_tx_queue {
__cacheline_group_end_aligned(cold);
};
libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
- 112 + sizeof(struct u64_stats_sync),
+ 120 + sizeof(struct u64_stats_sync),
24);
/**
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path Joshua Hay
` (3 subsequent siblings)
5 siblings, 1 reply; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan
Cc: netdev, Joshua Hay, Luigi Rizzo, Brian Vazquez, Madhu Chittim
Track the gap between next_to_use and the last RE index. Set RE again
once the gap is large enough to ensure the RE bit is set frequently
enough. This is
critical before removing the stashing mechanisms because the
opportunistic descriptor ring cleaning from the out-of-order completions
will go away. Previously the descriptors would be "cleaned" by both the
descriptor (RE) completion and the out-of-order completions. Without the
latter, we must ensure the RE bit is set more frequently. Otherwise,
it's theoretically possible for the descriptor ring next_to_clean to
never advance. The previous implementation depended on the start of a
packet landing on an index that is a multiple of 64 in the descriptor
ring, which is not guaranteed with large packets.
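As a worked example (values assumed for illustration): with a 512-entry
descriptor ring and IDPF_TX_SPLITQ_RE_MIN_GAP of 64, if last_re is 480 and
next_to_use has wrapped to 30, the gap is 30 - 480 + 512 = 62, so RE is not
set yet. Once next_to_use reaches 32 or beyond, the gap hits 64, RE is set
on that packet's last descriptor, and last_re is updated to the new
next_to_use.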
Signed-off-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: Brian Vazquez <brianvv@google.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 20 +++++++++++++++++++-
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 ++++--
2 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index d5908126163d..3359b3b52625 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -294,6 +294,8 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
*/
idpf_queue_change(GEN_CHK, refillq);
+ tx_q->last_re = tx_q->desc_count - IDPF_TX_SPLITQ_RE_MIN_GAP;
+
return 0;
err_alloc:
@@ -2912,6 +2914,21 @@ static void idpf_tx_set_tstamp_desc(union idpf_flex_tx_ctx_desc *ctx_desc,
{ }
#endif /* CONFIG_PTP_1588_CLOCK */
+/**
+ * idpf_tx_splitq_need_re - check whether RE bit needs to be set
+ * @tx_q: pointer to Tx queue
+ *
+ * Return: true if RE bit needs to be set, false otherwise
+ */
+static bool idpf_tx_splitq_need_re(struct idpf_tx_queue *tx_q)
+{
+ int gap = tx_q->next_to_use - tx_q->last_re;
+
+ gap += (gap < 0) ? tx_q->desc_count : 0;
+
+ return gap >= IDPF_TX_SPLITQ_RE_MIN_GAP;
+}
+
/**
* idpf_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors
* @skb: send buffer
@@ -2998,9 +3015,10 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
* MIN_RING size to ensure it will be set at least once each
* time around the ring.
*/
- if (!(tx_q->next_to_use % IDPF_TX_SPLITQ_RE_MIN_GAP)) {
+ if (idpf_tx_splitq_need_re(tx_q)) {
tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE;
tx_q->txq_grp->num_completions_pending++;
+ tx_q->last_re = tx_q->next_to_use;
}
if (skb->ip_summed == CHECKSUM_PARTIAL)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 6924bee6ff5b..65acad4c929d 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -609,6 +609,8 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* @netdev: &net_device corresponding to this queue
* @next_to_use: Next descriptor to use
* @next_to_clean: Next descriptor to clean
+ * @last_re: last descriptor index that RE bit was set
+ * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
* the TX completion queue, it can be for any TXQ associated
* with that completion queue. This means we can clean up to
@@ -619,7 +621,6 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* only once at the end of the cleaning routine.
* @clean_budget: singleq only, queue cleaning budget
* @cleaned_pkts: Number of packets cleaned for the above said case
- * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
* @stash: Tx buffer stash for Flow-based scheduling mode
* @refillq: Pointer to refill queue
* @compl_tag_bufid_m: Completion tag buffer id mask
@@ -662,6 +663,8 @@ struct idpf_tx_queue {
__cacheline_group_begin_aligned(read_write);
u16 next_to_use;
u16 next_to_clean;
+ u16 last_re;
+ u16 tx_max_bufs;
union {
u32 cleaned_bytes;
@@ -669,7 +672,6 @@ struct idpf_tx_queue {
};
u16 cleaned_pkts;
- u16 tx_max_bufs;
struct idpf_txq_stash *stash;
struct idpf_sw_queue *refillq;
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool Joshua Hay
` (2 subsequent siblings)
5 siblings, 1 reply; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay, Madhu Chittim
Move (and rename) the existing rollback logic to singleq.c since that
will be the only consumer. Create a simplified splitq-specific rollback
function to loop through and unmap tx_bufs based on the completion tag.
This is critical before replacing the Tx buffer ring with the buffer
pool since the previous rollback indexing will not work to unmap the
chained buffers from the pool.
Cache the next_to_use index before any portion of the packet is put on
the descriptor ring. If an error occurs, the rollback bumps tail back to
the correct next_to_use value. Because the splitq path now supports
different types of context descriptors (and potentially multiple in the
future), this will take care of rolling back any and all context
descriptors encoded on the ring for the erroneous packet. The previous
rollback logic was broken for PTP packets since it would not account for
the PTP context descriptor.
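For example (descriptor indices assumed for illustration): if next_to_use
was 100 when the frame started and the driver wrote a TSO context descriptor
plus a PTP context descriptor before a DMA mapping failed, bumping tail back
to the cached prev_ntu of 100 discards both context descriptors along with
any data descriptors in one step, whereas the old index-based unwind only
accounted for the single TSO context descriptor.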
Fixes: 1a49cf814fe1 ("idpf: add Tx timestamp flows")
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
.../ethernet/intel/idpf/idpf_singleq_txrx.c | 57 +++++++++++-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 91 ++++++++-----------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 5 +-
3 files changed, 95 insertions(+), 58 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index 993c354aa27a..a3b3261bbdfa 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -179,6 +179,58 @@ static int idpf_tx_singleq_csum(struct sk_buff *skb,
return 1;
}
+/**
+ * idpf_tx_singleq_dma_map_error - handle TX DMA map errors
+ * @txq: queue to send buffer on
+ * @skb: send buffer
+ * @first: original first buffer info buffer for packet
+ * @idx: starting point on ring to unwind
+ */
+static void idpf_tx_singleq_dma_map_error(struct idpf_tx_queue *txq,
+ struct sk_buff *skb,
+ struct idpf_tx_buf *first, u16 idx)
+{
+ struct libeth_sq_napi_stats ss = { };
+ struct libeth_cq_pp cp = {
+ .dev = txq->dev,
+ .ss = &ss,
+ };
+
+ u64_stats_update_begin(&txq->stats_sync);
+ u64_stats_inc(&txq->q_stats.dma_map_errs);
+ u64_stats_update_end(&txq->stats_sync);
+
+ /* clear dma mappings for failed tx_buf map */
+ for (;;) {
+ struct idpf_tx_buf *tx_buf;
+
+ tx_buf = &txq->tx_buf[idx];
+ libeth_tx_complete(tx_buf, &cp);
+ if (tx_buf == first)
+ break;
+ if (idx == 0)
+ idx = txq->desc_count;
+ idx--;
+ }
+
+ if (skb_is_gso(skb)) {
+ union idpf_tx_flex_desc *tx_desc;
+
+ /* If we failed a DMA mapping for a TSO packet, we will have
+ * used one additional descriptor for a context
+ * descriptor. Reset that here.
+ */
+ tx_desc = &txq->flex_tx[idx];
+ memset(tx_desc, 0, sizeof(*tx_desc));
+ if (idx == 0)
+ idx = txq->desc_count;
+ idx--;
+ }
+
+ /* Update tail in case netdev_xmit_more was previously true */
+ idpf_tx_buf_hw_update(txq, idx, false);
+}
+
/**
* idpf_tx_singleq_map - Build the Tx base descriptor
* @tx_q: queue to send buffer on
@@ -219,8 +271,9 @@ static void idpf_tx_singleq_map(struct idpf_tx_queue *tx_q,
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
- if (dma_mapping_error(tx_q->dev, dma))
- return idpf_tx_dma_map_error(tx_q, skb, first, i);
+ if (unlikely(dma_mapping_error(tx_q->dev, dma)))
+ return idpf_tx_singleq_dma_map_error(tx_q, skb,
+ first, i);
/* record length, and DMA address */
dma_unmap_len_set(tx_buf, len, size);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 3359b3b52625..2362d8588e5b 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -2339,57 +2339,6 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
return count;
}
-/**
- * idpf_tx_dma_map_error - handle TX DMA map errors
- * @txq: queue to send buffer on
- * @skb: send buffer
- * @first: original first buffer info buffer for packet
- * @idx: starting point on ring to unwind
- */
-void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
- struct idpf_tx_buf *first, u16 idx)
-{
- struct libeth_sq_napi_stats ss = { };
- struct libeth_cq_pp cp = {
- .dev = txq->dev,
- .ss = &ss,
- };
-
- u64_stats_update_begin(&txq->stats_sync);
- u64_stats_inc(&txq->q_stats.dma_map_errs);
- u64_stats_update_end(&txq->stats_sync);
-
- /* clear dma mappings for failed tx_buf map */
- for (;;) {
- struct idpf_tx_buf *tx_buf;
-
- tx_buf = &txq->tx_buf[idx];
- libeth_tx_complete(tx_buf, &cp);
- if (tx_buf == first)
- break;
- if (idx == 0)
- idx = txq->desc_count;
- idx--;
- }
-
- if (skb_is_gso(skb)) {
- union idpf_tx_flex_desc *tx_desc;
-
- /* If we failed a DMA mapping for a TSO packet, we will have
- * used one additional descriptor for a context
- * descriptor. Reset that here.
- */
- tx_desc = &txq->flex_tx[idx];
- memset(tx_desc, 0, sizeof(*tx_desc));
- if (idx == 0)
- idx = txq->desc_count;
- idx--;
- }
-
- /* Update tail in case netdev_xmit_more was previously true */
- idpf_tx_buf_hw_update(txq, idx, false);
-}
-
/**
* idpf_tx_splitq_bump_ntu - adjust NTU and generation
* @txq: the tx ring to wrap
@@ -2438,6 +2387,37 @@ static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
return true;
}
+/**
+ * idpf_tx_splitq_pkt_err_unmap - Unmap buffers and bump tail in case of error
+ * @txq: Tx queue to unwind
+ * @params: pointer to splitq params struct
+ * @first: starting buffer for packet to unmap
+ */
+static void idpf_tx_splitq_pkt_err_unmap(struct idpf_tx_queue *txq,
+ struct idpf_tx_splitq_params *params,
+ struct idpf_tx_buf *first)
+{
+ struct libeth_sq_napi_stats ss = { };
+ struct idpf_tx_buf *tx_buf = first;
+ struct libeth_cq_pp cp = {
+ .dev = txq->dev,
+ .ss = &ss,
+ };
+ u32 idx = 0;
+
+ u64_stats_update_begin(&txq->stats_sync);
+ u64_stats_inc(&txq->q_stats.dma_map_errs);
+ u64_stats_update_end(&txq->stats_sync);
+
+ do {
+ libeth_tx_complete(tx_buf, &cp);
+ idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
+ } while (idpf_tx_buf_compl_tag(tx_buf) == params->compl_tag);
+
+ /* Update tail in case netdev_xmit_more was previously true. */
+ idpf_tx_buf_hw_update(txq, params->prev_ntu, false);
+}
+
/**
* idpf_tx_splitq_map - Build the Tx flex descriptor
* @tx_q: queue to send buffer on
@@ -2482,8 +2462,9 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
- if (dma_mapping_error(tx_q->dev, dma))
- return idpf_tx_dma_map_error(tx_q, skb, first, i);
+ if (unlikely(dma_mapping_error(tx_q->dev, dma)))
+ return idpf_tx_splitq_pkt_err_unmap(tx_q, params,
+ first);
first->nr_frags++;
idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
@@ -2939,7 +2920,9 @@ static bool idpf_tx_splitq_need_re(struct idpf_tx_queue *tx_q)
static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
struct idpf_tx_queue *tx_q)
{
- struct idpf_tx_splitq_params tx_params = { };
+ struct idpf_tx_splitq_params tx_params = {
+ .prev_ntu = tx_q->next_to_use,
+ };
union idpf_flex_tx_ctx_desc *ctx_desc;
struct idpf_tx_buf *first;
unsigned int count;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 65acad4c929d..e07e21f6a67c 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -195,6 +195,7 @@ struct idpf_tx_offload_params {
* @compl_tag: Associated tag for completion
* @td_tag: Descriptor tunneling tag
* @offload: Offload parameters
+ * @prev_ntu: stored TxQ next_to_use in case of rollback
*/
struct idpf_tx_splitq_params {
enum idpf_tx_desc_dtype_value dtype;
@@ -205,6 +206,8 @@ struct idpf_tx_splitq_params {
};
struct idpf_tx_offload_params offload;
+
+ u16 prev_ntu;
};
enum idpf_tx_ctx_desc_eipt_offload {
@@ -1041,8 +1044,6 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
bool xmit_more);
unsigned int idpf_size_to_txd_count(unsigned int size);
netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb);
-void idpf_tx_dma_map_error(struct idpf_tx_queue *txq, struct sk_buff *skb,
- struct idpf_tx_buf *first, u16 ring_idx);
unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
struct sk_buff *skb);
void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
` (2 preceding siblings ...)
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-08-04 17:02 ` Alexander Lobakin
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code Joshua Hay
5 siblings, 2 replies; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan
Cc: netdev, Joshua Hay, Luigi Rizzo, Brian Vazquez, Madhu Chittim,
Aleksandr Loktionov
Replace the TxQ buffer ring with one large pool/array of buffers (only
for flow scheduling). This eliminates the tag generation and makes it
impossible for a tag to be associated with more than one packet.
The completion tag passed to HW through the descriptor is the index into
the array. That same completion tag is posted back to the driver in the
completion descriptor, and used to index into the array to quickly
retrieve the buffer during cleaning. In this way, the tags are treated
as a fixed-size resource. If all tags are in use, no more packets can be
sent on that particular queue (until some are freed up). The tag pool
size is 64K since the completion tag width is 16 bits.
For each packet, the driver pulls a free tag from the refillq to get the
next free buffer index. When cleaning is complete, the tag is posted
back to the refillq. A multi-frag packet spans multiple buffers in the
driver and therefore uses multiple buffer indexes/tags from the pool.
Each frag pulls from the refillq to get the next free buffer index.
These are tracked in a next_buf field that replaces the completion tag
field in the buffer struct. This chains the buffers together so that the
packet can be cleaned from the starting completion tag taken from the
completion descriptor, then from the next_buf field for each subsequent
buffer.
If a dma_mapping_error occurs or the refillq runs out of free
buf_ids, the packet takes the rollback error path. This unmaps
any buffers previously mapped for the packet. Since several free
buf_ids could have already been pulled from the refillq, we need to
restore its original state as well. Otherwise, the buf_ids/tags
will be leaked and not used again until the queue is reallocated.
Descriptor completions only advance the descriptor ring index to "clean"
the descriptors. The packet completions only clean the buffers
associated with the given packet completion tag and do not update the
descriptor ring index.
When operating in queue based scheduling mode, the array still acts as a
ring and will only have TxQ descriptor count entries. The tx_bufs are
still associated 1:1 with the descriptor ring entries and we can use the
conventional indexing mechanisms.
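A rough, self-contained sketch of the chain walk described above
(hypothetical names; in the driver the link lives in the buffer's priv field
via idpf_tx_buf_next() and the terminator is IDPF_TXBUF_NULL):

#include <stdint.h>
#include <stdio.h>

#define TXBUF_NULL UINT32_MAX		/* stand-in for IDPF_TXBUF_NULL */

struct tx_buf {
	uint32_t next_buf;		/* index of the next buffer of this packet */
	/* DMA handle, skb pointer, etc. would live here in the driver */
};

/* walk one packet's buffer chain starting from its completion tag */
static void clean_packet(struct tx_buf *pool, uint32_t buf_id)
{
	while (buf_id != TXBUF_NULL) {
		uint32_t next = pool[buf_id].next_buf;

		/* unmap/complete pool[buf_id] here, then post buf_id
		 * back to the refillq so the tag can be reused
		 */
		printf("cleaned buf_id %u\n", buf_id);
		buf_id = next;
	}
}

int main(void)
{
	struct tx_buf pool[8] = { 0 };

	/* a three-buffer packet chained as 5 -> 2 -> 7 */
	pool[5].next_buf = 2;
	pool[2].next_buf = 7;
	pool[7].next_buf = TXBUF_NULL;

	clean_packet(pool, 5);	/* 5 is the tag from the completion descriptor */
	return 0;
}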
Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support")
Signed-off-by: Luigi Rizzo <lrizzo@google.com>
Signed-off-by: Brian Vazquez <brianvv@google.com>
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
---
v3:
- remove unreachable code
v2:
- removed unused buf_size
- miscellaneous cleanup based on changes to prior patches and addition
of packet rollback changes patch
- refactor packet rollback logic to iterate through chained bufs
- add refillq state restore if rollback occurs
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 203 +++++++++-----------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 10 +-
2 files changed, 102 insertions(+), 111 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 2362d8588e5b..fe7494d373b2 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -13,6 +13,7 @@ struct idpf_tx_stash {
struct libeth_sqe buf;
};
+#define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv)
#define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv)
LIBETH_SQE_CHECK_PRIV(u32);
@@ -91,7 +92,7 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
return;
/* Free all the Tx buffer sk_buffs */
- for (i = 0; i < txq->desc_count; i++)
+ for (i = 0; i < txq->buf_pool_size; i++)
libeth_tx_complete(&txq->tx_buf[i], &cp);
kfree(txq->tx_buf);
@@ -199,14 +200,16 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
{
struct idpf_buf_lifo *buf_stack;
- int buf_size;
int i;
/* Allocate book keeping buffers only. Buffers to be supplied to HW
* are allocated by kernel network stack and received as part of skb
*/
- buf_size = sizeof(struct idpf_tx_buf) * tx_q->desc_count;
- tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (idpf_queue_has(FLOW_SCH_EN, tx_q))
+ tx_q->buf_pool_size = U16_MAX;
+ else
+ tx_q->buf_pool_size = tx_q->desc_count;
+ tx_q->tx_buf = kcalloc(tx_q->buf_pool_size, sizeof(*tx_q->tx_buf), GFP_KERNEL);
if (!tx_q->tx_buf)
return -ENOMEM;
@@ -275,7 +278,7 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
return 0;
refillq = tx_q->refillq;
- refillq->desc_count = tx_q->desc_count;
+ refillq->desc_count = tx_q->buf_pool_size;
refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
GFP_KERNEL);
if (!refillq->ring) {
@@ -1869,6 +1872,12 @@ static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
struct idpf_tx_buf *tx_buf;
bool clean_complete = true;
+ if (descs_only) {
+ /* Bump ring index to mark as cleaned. */
+ tx_q->next_to_clean = end;
+ return true;
+ }
+
tx_desc = &tx_q->flex_tx[ntc];
next_pending_desc = &tx_q->flex_tx[end];
tx_buf = &tx_q->tx_buf[ntc];
@@ -1935,87 +1944,43 @@ do { \
} while (0)
/**
- * idpf_tx_clean_buf_ring - clean flow scheduling TX queue buffers
+ * idpf_tx_clean_bufs - clean flow scheduling TX queue buffers
* @txq: queue to clean
- * @compl_tag: completion tag of packet to clean (from completion descriptor)
+ * @buf_id: packet's starting buffer ID, from completion descriptor
* @cleaned: pointer to stats struct to track cleaned packets/bytes
* @budget: Used to determine if we are in netpoll
*
- * Cleans all buffers associated with the input completion tag either from the
- * TX buffer ring or from the hash table if the buffers were previously
- * stashed. Returns the byte/segment count for the cleaned packet associated
- * this completion tag.
+ * Clean all buffers associated with the packet starting at buf_id. Returns the
+ * byte/segment count for the cleaned packet.
*/
-static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
- struct libeth_sq_napi_stats *cleaned,
- int budget)
+static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
+ struct libeth_sq_napi_stats *cleaned,
+ int budget)
{
- u16 idx = compl_tag & txq->compl_tag_bufid_m;
struct idpf_tx_buf *tx_buf = NULL;
struct libeth_cq_pp cp = {
.dev = txq->dev,
.ss = cleaned,
.napi = budget,
};
- u16 ntc, orig_idx = idx;
-
- tx_buf = &txq->tx_buf[idx];
-
- if (unlikely(tx_buf->type <= LIBETH_SQE_CTX ||
- idpf_tx_buf_compl_tag(tx_buf) != compl_tag))
- return false;
+ tx_buf = &txq->tx_buf[buf_id];
if (tx_buf->type == LIBETH_SQE_SKB) {
if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS)
idpf_tx_read_tstamp(txq, tx_buf->skb);
libeth_tx_complete(tx_buf, &cp);
+ idpf_post_buf_refill(txq->refillq, buf_id);
}
- idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
+ while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
+ u16 buf_id = idpf_tx_buf_next(tx_buf);
- while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) {
+ tx_buf = &txq->tx_buf[buf_id];
libeth_tx_complete(tx_buf, &cp);
- idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
+ idpf_post_buf_refill(txq->refillq, buf_id);
}
- /*
- * It's possible the packet we just cleaned was an out of order
- * completion, which means we can stash the buffers starting from
- * the original next_to_clean and reuse the descriptors. We need
- * to compare the descriptor ring next_to_clean packet's "first" buffer
- * to the "first" buffer of the packet we just cleaned to determine if
- * this is the case. Howevever, next_to_clean can point to either a
- * reserved buffer that corresponds to a context descriptor used for the
- * next_to_clean packet (TSO packet) or the "first" buffer (single
- * packet). The orig_idx from the packet we just cleaned will always
- * point to the "first" buffer. If next_to_clean points to a reserved
- * buffer, let's bump ntc once and start the comparison from there.
- */
- ntc = txq->next_to_clean;
- tx_buf = &txq->tx_buf[ntc];
-
- if (tx_buf->type == LIBETH_SQE_CTX)
- idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf);
-
- /*
- * If ntc still points to a different "first" buffer, clean the
- * descriptor ring and stash all of the buffers for later cleaning. If
- * we cannot stash all of the buffers, next_to_clean will point to the
- * "first" buffer of the packet that could not be stashed and cleaning
- * will start there next time.
- */
- if (unlikely(tx_buf != &txq->tx_buf[orig_idx] &&
- !idpf_tx_splitq_clean(txq, orig_idx, budget, cleaned,
- true)))
- return true;
-
- /*
- * Otherwise, update next_to_clean to reflect the cleaning that was
- * done above.
- */
- txq->next_to_clean = idx;
-
return true;
}
@@ -2046,12 +2011,10 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
- idpf_post_buf_refill(txq->refillq, compl_tag);
-
/* If we didn't clean anything on the ring, this packet must be
* in the hash table. Go clean it there.
*/
- if (!idpf_tx_clean_buf_ring(txq, compl_tag, cleaned, budget))
+ if (!idpf_tx_clean_bufs(txq, compl_tag, cleaned, budget))
idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget);
}
@@ -2364,7 +2327,7 @@ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
* Return: true if a buffer ID was found, false if not
*/
static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
- u16 *buf_id)
+ u32 *buf_id)
{
u16 ntc = refillq->next_to_clean;
u32 refill_desc;
@@ -2397,25 +2360,34 @@ static void idpf_tx_splitq_pkt_err_unmap(struct idpf_tx_queue *txq,
struct idpf_tx_splitq_params *params,
struct idpf_tx_buf *first)
{
+ struct idpf_sw_queue *refillq = txq->refillq;
struct libeth_sq_napi_stats ss = { };
struct idpf_tx_buf *tx_buf = first;
struct libeth_cq_pp cp = {
.dev = txq->dev,
.ss = &ss,
};
- u32 idx = 0;
u64_stats_update_begin(&txq->stats_sync);
u64_stats_inc(&txq->q_stats.dma_map_errs);
u64_stats_update_end(&txq->stats_sync);
- do {
+ libeth_tx_complete(tx_buf, &cp);
+ while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
+ tx_buf = &txq->tx_buf[idpf_tx_buf_next(tx_buf)];
libeth_tx_complete(tx_buf, &cp);
- idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
- } while (idpf_tx_buf_compl_tag(tx_buf) == params->compl_tag);
+ }
/* Update tail in case netdev_xmit_more was previously true. */
idpf_tx_buf_hw_update(txq, params->prev_ntu, false);
+
+ if (!refillq)
+ return;
+
+ /* Restore refillq state to avoid leaking tags. */
+ if (params->prev_refill_gen != idpf_queue_has(RFL_GEN_CHK, refillq))
+ idpf_queue_change(RFL_GEN_CHK, refillq);
+ refillq->next_to_clean = params->prev_refill_ntc;
}
/**
@@ -2439,6 +2411,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
struct netdev_queue *nq;
struct sk_buff *skb;
skb_frag_t *frag;
+ u32 next_buf_id;
u16 td_cmd = 0;
dma_addr_t dma;
@@ -2456,18 +2429,16 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
tx_buf = first;
first->nr_frags = 0;
- params->compl_tag =
- (tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i;
-
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
- if (unlikely(dma_mapping_error(tx_q->dev, dma)))
+ if (unlikely(dma_mapping_error(tx_q->dev, dma))) {
+ idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
return idpf_tx_splitq_pkt_err_unmap(tx_q, params,
first);
+ }
first->nr_frags++;
- idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
tx_buf->type = LIBETH_SQE_FRAG;
/* record length, and DMA address */
@@ -2523,29 +2494,14 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
max_data);
if (unlikely(++i == tx_q->desc_count)) {
- tx_buf = tx_q->tx_buf;
tx_desc = &tx_q->flex_tx[0];
i = 0;
tx_q->compl_tag_cur_gen =
IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
} else {
- tx_buf++;
tx_desc++;
}
- /* Since this packet has a buffer that is going to span
- * multiple descriptors, it's going to leave holes in
- * to the TX buffer ring. To ensure these holes do not
- * cause issues in the cleaning routines, we will clear
- * them of any stale data and assign them the same
- * completion tag as the current packet. Then when the
- * packet is being cleaned, the cleaning routines will
- * simply pass over these holes and finish cleaning the
- * rest of the packet.
- */
- tx_buf->type = LIBETH_SQE_EMPTY;
- idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
-
/* Adjust the DMA offset and the remaining size of the
* fragment. On the first iteration of this loop,
* max_data will be >= 12K and <= 16K-1. On any
@@ -2570,15 +2526,26 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
if (unlikely(++i == tx_q->desc_count)) {
- tx_buf = tx_q->tx_buf;
tx_desc = &tx_q->flex_tx[0];
i = 0;
tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
} else {
- tx_buf++;
tx_desc++;
}
+ if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
+ if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
+ &next_buf_id))) {
+ idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
+ return idpf_tx_splitq_pkt_err_unmap(tx_q, params,
+ first);
+ }
+ } else {
+ next_buf_id = i;
+ }
+ idpf_tx_buf_next(tx_buf) = next_buf_id;
+ tx_buf = &tx_q->tx_buf[next_buf_id];
+
size = skb_frag_size(frag);
data_len -= size;
@@ -2593,6 +2560,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
/* write last descriptor with RS and EOP bits */
first->rs_idx = i;
+ idpf_tx_buf_next(tx_buf) = IDPF_TXBUF_NULL;
td_cmd |= params->eop_cmd;
idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
i = idpf_tx_splitq_bump_ntu(tx_q, i);
@@ -2801,8 +2769,6 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_tx_queue *txq)
union idpf_flex_tx_ctx_desc *desc;
int i = txq->next_to_use;
- txq->tx_buf[i].type = LIBETH_SQE_CTX;
-
/* grab the next descriptor */
desc = &txq->flex_ctx[i];
txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i);
@@ -2927,6 +2893,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
struct idpf_tx_buf *first;
unsigned int count;
int tso, idx;
+ u32 buf_id;
count = idpf_tx_desc_count_required(tx_q, skb);
if (unlikely(!count))
@@ -2970,26 +2937,28 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
idpf_tx_set_tstamp_desc(ctx_desc, idx);
}
- /* record the location of the first descriptor for this packet */
- first = &tx_q->tx_buf[tx_q->next_to_use];
- first->skb = skb;
+ if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
+ struct idpf_sw_queue *refillq = tx_q->refillq;
- if (tso) {
- first->packets = tx_params.offload.tso_segs;
- first->bytes = skb->len +
- ((first->packets - 1) * tx_params.offload.tso_hdr_len);
- } else {
- first->packets = 1;
- first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
- }
+ /* Save refillq state in case of a packet rollback. Otherwise,
+ * the tags will be leaked since they will be popped from the
+ * refillq but never reposted during cleaning.
+ */
+ tx_params.prev_refill_gen =
+ idpf_queue_has(RFL_GEN_CHK, refillq);
+ tx_params.prev_refill_ntc = refillq->next_to_clean;
- if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
- &tx_params.compl_tag))) {
- u64_stats_update_begin(&tx_q->stats_sync);
- u64_stats_inc(&tx_q->q_stats.q_busy);
- u64_stats_update_end(&tx_q->stats_sync);
+ &buf_id))) {
+ if (tx_params.prev_refill_gen !=
+ idpf_queue_has(RFL_GEN_CHK, refillq))
+ idpf_queue_change(RFL_GEN_CHK, refillq);
+ refillq->next_to_clean = tx_params.prev_refill_ntc;
+
+ tx_q->next_to_use = tx_params.prev_ntu;
+ return idpf_tx_drop_skb(tx_q, skb);
}
+ tx_params.compl_tag = buf_id;
tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
@@ -3008,6 +2977,8 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
} else {
+ buf_id = tx_q->next_to_use;
+
tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2;
tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD;
@@ -3015,6 +2986,18 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
}
+ first = &tx_q->tx_buf[buf_id];
+ first->skb = skb;
+
+ if (tso) {
+ first->packets = tx_params.offload.tso_segs;
+ first->bytes = skb->len +
+ ((first->packets - 1) * tx_params.offload.tso_hdr_len);
+ } else {
+ first->packets = 1;
+ first->bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
+ }
+
idpf_tx_splitq_map(tx_q, &tx_params, first);
return NETDEV_TX_OK;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index e07e21f6a67c..10ec37d10081 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -136,6 +136,8 @@ do { \
((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
0 : (txq)->compl_tag_cur_gen)
+#define IDPF_TXBUF_NULL U32_MAX
+
#define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
#define IDPF_TX_FLAGS_TSO BIT(0)
@@ -196,6 +198,8 @@ struct idpf_tx_offload_params {
* @td_tag: Descriptor tunneling tag
* @offload: Offload parameters
* @prev_ntu: stored TxQ next_to_use in case of rollback
+ * @prev_refill_ntc: stored refillq next_to_clean in case of packet rollback
+ * @prev_refill_gen: stored refillq generation bit in case of packet rollback
*/
struct idpf_tx_splitq_params {
enum idpf_tx_desc_dtype_value dtype;
@@ -208,6 +212,8 @@ struct idpf_tx_splitq_params {
struct idpf_tx_offload_params offload;
u16 prev_ntu;
+ u16 prev_refill_ntc;
+ bool prev_refill_gen;
};
enum idpf_tx_ctx_desc_eipt_offload {
@@ -637,6 +643,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* @size: Length of descriptor ring in bytes
* @dma: Physical address of ring
* @q_vector: Backreference to associated vector
+ * @buf_pool_size: Total number of idpf_tx_buf
*/
struct idpf_tx_queue {
__cacheline_group_begin_aligned(read_mostly);
@@ -695,11 +702,12 @@ struct idpf_tx_queue {
dma_addr_t dma;
struct idpf_q_vector *q_vector;
+ u32 buf_pool_size;
__cacheline_group_end_aligned(cold);
};
libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
120 + sizeof(struct u64_stats_sync),
- 24);
+ 32);
/**
* struct idpf_buf_queue - software structure representing a buffer queue
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
` (3 preceding siblings ...)
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code Joshua Hay
5 siblings, 1 reply; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay, Madhu Chittim
The Tx refillq logic will cause packets to be silently dropped if there
are not enough buffer resources available to send a packet in flow
scheduling mode. Instead, determine how many buffers are needed along
with the number of descriptors. Make sure there are enough of both resources
to send the packet, and stop the queue if not.
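As a worked example (queue sizes assumed for illustration), with a refillq
desc_count of 512, next_to_clean of 100 and next_to_use of 40 (the posting
side has wrapped), the free-buffer helper added below reports
512 + 40 - 100 - 1 = 451 free buf_ids, so a packet needing, say, 1 head
buffer plus 17 frags (bufs_needed = 18) may proceed. With next_to_clean of
40 and next_to_use of 42, only one free buf_id is counted and the queue is
stopped instead of silently dropping the packet.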
Fixes: 7292af042bcf ("idpf: fix a race in txq wakeup")
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
v2:
- Init buf_count to 1 and += nr_frags to account for header
- s/unsigned int/u32 where appropriate
- replaced BUFS_UNUSED macro with static inline func
---
.../ethernet/intel/idpf/idpf_singleq_txrx.c | 4 +-
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 47 +++++++++++++------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 15 +++++-
3 files changed, 47 insertions(+), 19 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
index a3b3261bbdfa..bf9b820c8330 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -415,11 +415,11 @@ netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
{
struct idpf_tx_offload_params offload = { };
struct idpf_tx_buf *first;
+ u32 count, buf_count = 1;
int csum, tso, needed;
- unsigned int count;
__be16 protocol;
- count = idpf_tx_desc_count_required(tx_q, skb);
+ count = idpf_tx_res_count_required(tx_q, skb, &buf_count);
if (unlikely(!count))
return idpf_tx_drop_skb(tx_q, skb);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index fe7494d373b2..6563d5831a23 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -2190,15 +2190,22 @@ void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
}
-/* Global conditions to tell whether the txq (and related resources)
- * has room to allow the use of "size" descriptors.
+/**
+ * idpf_tx_splitq_has_room - check if enough Tx splitq resources are available
+ * @tx_q: the queue to be checked
+ * @descs_needed: number of descriptors required for this packet
+ * @bufs_needed: number of Tx buffers required for this packet
+ *
+ * Return: 0 if no room available, 1 otherwise
*/
-static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
+static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 descs_needed,
+ u32 bufs_needed)
{
- if (IDPF_DESC_UNUSED(tx_q) < size ||
+ if (IDPF_DESC_UNUSED(tx_q) < descs_needed ||
IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) ||
- IDPF_TX_BUF_RSV_LOW(tx_q))
+ IDPF_TX_BUF_RSV_LOW(tx_q) ||
+ idpf_tx_splitq_get_free_bufs(tx_q->refillq) < bufs_needed)
return 0;
return 1;
}
@@ -2207,14 +2214,21 @@ static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 size)
* idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
* @tx_q: the queue to be checked
* @descs_needed: number of descriptors required for this packet
+ * @bufs_needed: number of buffers needed for this packet
*
- * Returns 0 if stop is not needed
+ * Return: 0 if stop is not needed
*/
static int idpf_tx_maybe_stop_splitq(struct idpf_tx_queue *tx_q,
- unsigned int descs_needed)
+ u32 descs_needed,
+ u32 bufs_needed)
{
+ /* Since we have multiple resources to check for splitq, our
+ * start,stop_thrs becomes a boolean check instead of a count
+ * threshold.
+ */
if (netif_subqueue_maybe_stop(tx_q->netdev, tx_q->idx,
- idpf_txq_has_room(tx_q, descs_needed),
+ idpf_txq_has_room(tx_q, descs_needed,
+ bufs_needed),
1, 1))
return 0;
@@ -2256,14 +2270,16 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
}
/**
- * idpf_tx_desc_count_required - calculate number of Tx descriptors needed
+ * idpf_tx_res_count_required - get number of Tx resources needed for this pkt
* @txq: queue to send buffer on
* @skb: send buffer
+ * @bufs_needed: (output) number of buffers needed for this skb.
*
- * Returns number of data descriptors needed for this skb.
+ * Return: number of data descriptors and buffers needed for this skb.
*/
-unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
- struct sk_buff *skb)
+unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq,
+ struct sk_buff *skb,
+ u32 *bufs_needed)
{
const struct skb_shared_info *shinfo;
unsigned int count = 0, i;
@@ -2274,6 +2290,7 @@ unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
return count;
shinfo = skb_shinfo(skb);
+ *bufs_needed += shinfo->nr_frags;
for (i = 0; i < shinfo->nr_frags; i++) {
unsigned int size;
@@ -2891,11 +2908,11 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
};
union idpf_flex_tx_ctx_desc *ctx_desc;
struct idpf_tx_buf *first;
- unsigned int count;
+ u32 count, buf_count = 1;
int tso, idx;
u32 buf_id;
- count = idpf_tx_desc_count_required(tx_q, skb);
+ count = idpf_tx_res_count_required(tx_q, skb, &buf_count);
if (unlikely(!count))
return idpf_tx_drop_skb(tx_q, skb);
@@ -2905,7 +2922,7 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
/* Check for splitq specific TX resources */
count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso);
- if (idpf_tx_maybe_stop_splitq(tx_q, count)) {
+ if (idpf_tx_maybe_stop_splitq(tx_q, count, buf_count)) {
idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
return NETDEV_TX_BUSY;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 10ec37d10081..36020db47880 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -1025,6 +1025,17 @@ static inline void idpf_vport_intr_set_wb_on_itr(struct idpf_q_vector *q_vector)
reg->dyn_ctl);
}
+/**
+ * idpf_tx_splitq_get_free_bufs - get number of free buf_ids in refillq
+ * @refillq: pointer to refillq containing buf_ids
+ */
+static inline u32 idpf_tx_splitq_get_free_bufs(struct idpf_sw_queue *refillq)
+{
+ return (refillq->next_to_use > refillq->next_to_clean ?
+ 0 : refillq->desc_count) +
+ refillq->next_to_use - refillq->next_to_clean - 1;
+}
+
int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
void idpf_vport_init_num_qs(struct idpf_vport *vport,
struct virtchnl2_create_vport *vport_msg);
@@ -1052,8 +1063,8 @@ void idpf_tx_buf_hw_update(struct idpf_tx_queue *tx_q, u32 val,
bool xmit_more);
unsigned int idpf_size_to_txd_count(unsigned int size);
netdev_tx_t idpf_tx_drop_skb(struct idpf_tx_queue *tx_q, struct sk_buff *skb);
-unsigned int idpf_tx_desc_count_required(struct idpf_tx_queue *txq,
- struct sk_buff *skb);
+unsigned int idpf_tx_res_count_required(struct idpf_tx_queue *txq,
+ struct sk_buff *skb, u32 *buf_count);
void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
struct idpf_tx_queue *tx_q);
--
2.39.2
* [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
` (4 preceding siblings ...)
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources Joshua Hay
@ 2025-07-25 18:42 ` Joshua Hay
2025-07-28 9:04 ` Loktionov, Aleksandr
2025-08-12 16:45 ` Salin, Samuel
5 siblings, 2 replies; 20+ messages in thread
From: Joshua Hay @ 2025-07-25 18:42 UTC (permalink / raw)
To: intel-wired-lan; +Cc: netdev, Joshua Hay, Madhu Chittim
With the new Tx buffer management scheme, there is no need for all of
the stashing mechanisms, the hash table, the reserve buffer stack, etc.
Remove all of that.
Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
---
v3: update comment format
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 314 ++------------------
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 47 +--
2 files changed, 22 insertions(+), 339 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index 6563d5831a23..a12cfad566a7 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -8,48 +8,12 @@
#include "idpf_ptp.h"
#include "idpf_virtchnl.h"
-struct idpf_tx_stash {
- struct hlist_node hlist;
- struct libeth_sqe buf;
-};
-
#define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv)
-#define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv)
LIBETH_SQE_CHECK_PRIV(u32);
static bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
unsigned int count);
-/**
- * idpf_buf_lifo_push - push a buffer pointer onto stack
- * @stack: pointer to stack struct
- * @buf: pointer to buf to push
- *
- * Returns 0 on success, negative on failure
- **/
-static int idpf_buf_lifo_push(struct idpf_buf_lifo *stack,
- struct idpf_tx_stash *buf)
-{
- if (unlikely(stack->top == stack->size))
- return -ENOSPC;
-
- stack->bufs[stack->top++] = buf;
-
- return 0;
-}
-
-/**
- * idpf_buf_lifo_pop - pop a buffer pointer from stack
- * @stack: pointer to stack struct
- **/
-static struct idpf_tx_stash *idpf_buf_lifo_pop(struct idpf_buf_lifo *stack)
-{
- if (unlikely(!stack->top))
- return NULL;
-
- return stack->bufs[--stack->top];
-}
-
/**
* idpf_tx_timeout - Respond to a Tx Hang
* @netdev: network interface device structure
@@ -78,14 +42,11 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
{
struct libeth_sq_napi_stats ss = { };
- struct idpf_buf_lifo *buf_stack;
- struct idpf_tx_stash *stash;
struct libeth_cq_pp cp = {
.dev = txq->dev,
.ss = &ss,
};
- struct hlist_node *tmp;
- u32 i, tag;
+ u32 i;
/* Buffers already cleared, nothing to do */
if (!txq->tx_buf)
@@ -97,33 +58,6 @@ static void idpf_tx_buf_rel_all(struct idpf_tx_queue *txq)
kfree(txq->tx_buf);
txq->tx_buf = NULL;
-
- if (!idpf_queue_has(FLOW_SCH_EN, txq))
- return;
-
- buf_stack = &txq->stash->buf_stack;
- if (!buf_stack->bufs)
- return;
-
- /*
- * If a Tx timeout occurred, there are potentially still bufs in the
- * hash table, free them here.
- */
- hash_for_each_safe(txq->stash->sched_buf_hash, tag, tmp, stash,
- hlist) {
- if (!stash)
- continue;
-
- libeth_tx_complete(&stash->buf, &cp);
- hash_del(&stash->hlist);
- idpf_buf_lifo_push(buf_stack, stash);
- }
-
- for (i = 0; i < buf_stack->size; i++)
- kfree(buf_stack->bufs[i]);
-
- kfree(buf_stack->bufs);
- buf_stack->bufs = NULL;
}
/**
@@ -199,9 +133,6 @@ static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
*/
static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
{
- struct idpf_buf_lifo *buf_stack;
- int i;
-
/* Allocate book keeping buffers only. Buffers to be supplied to HW
* are allocated by kernel network stack and received as part of skb
*/
@@ -213,29 +144,6 @@ static int idpf_tx_buf_alloc_all(struct idpf_tx_queue *tx_q)
if (!tx_q->tx_buf)
return -ENOMEM;
- if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
- return 0;
-
- buf_stack = &tx_q->stash->buf_stack;
-
- /* Initialize tx buf stack for out-of-order completions if
- * flow scheduling offload is enabled
- */
- buf_stack->bufs = kcalloc(tx_q->desc_count, sizeof(*buf_stack->bufs),
- GFP_KERNEL);
- if (!buf_stack->bufs)
- return -ENOMEM;
-
- buf_stack->size = tx_q->desc_count;
- buf_stack->top = tx_q->desc_count;
-
- for (i = 0; i < tx_q->desc_count; i++) {
- buf_stack->bufs[i] = kzalloc(sizeof(*buf_stack->bufs[i]),
- GFP_KERNEL);
- if (!buf_stack->bufs[i])
- return -ENOMEM;
- }
-
return 0;
}
@@ -349,8 +257,6 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
for (i = 0; i < vport->num_txq_grp; i++) {
for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
struct idpf_tx_queue *txq = vport->txq_grps[i].txqs[j];
- u8 gen_bits = 0;
- u16 bufidx_mask;
err = idpf_tx_desc_alloc(vport, txq);
if (err) {
@@ -359,34 +265,6 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
i);
goto err_out;
}
-
- if (!idpf_is_queue_model_split(vport->txq_model))
- continue;
-
- txq->compl_tag_cur_gen = 0;
-
- /* Determine the number of bits in the bufid
- * mask and add one to get the start of the
- * generation bits
- */
- bufidx_mask = txq->desc_count - 1;
- while (bufidx_mask >> 1) {
- txq->compl_tag_gen_s++;
- bufidx_mask = bufidx_mask >> 1;
- }
- txq->compl_tag_gen_s++;
-
- gen_bits = IDPF_TX_SPLITQ_COMPL_TAG_WIDTH -
- txq->compl_tag_gen_s;
- txq->compl_tag_gen_max = GETMAXVAL(gen_bits);
-
- /* Set bufid mask based on location of first
- * gen bit; it cannot simply be the descriptor
- * ring size-1 since we can have size values
- * where not all of those bits are set.
- */
- txq->compl_tag_bufid_m =
- GETMAXVAL(txq->compl_tag_gen_s);
}
if (!idpf_is_queue_model_split(vport->txq_model))
@@ -1041,9 +919,6 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
kfree(txq_grp->complq);
txq_grp->complq = NULL;
-
- if (flow_sch_en)
- kfree(txq_grp->stashes);
}
kfree(vport->txq_grps);
vport->txq_grps = NULL;
@@ -1404,7 +1279,6 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
for (i = 0; i < vport->num_txq_grp; i++) {
struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
struct idpf_adapter *adapter = vport->adapter;
- struct idpf_txq_stash *stashes;
int j;
tx_qgrp->vport = vport;
@@ -1417,15 +1291,6 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
goto err_alloc;
}
- if (split && flow_sch_en) {
- stashes = kcalloc(num_txq, sizeof(*stashes),
- GFP_KERNEL);
- if (!stashes)
- goto err_alloc;
-
- tx_qgrp->stashes = stashes;
- }
-
for (j = 0; j < tx_qgrp->num_txq; j++) {
struct idpf_tx_queue *q = tx_qgrp->txqs[j];
@@ -1445,11 +1310,6 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
if (!flow_sch_en)
continue;
- if (split) {
- q->stash = &stashes[j];
- hash_init(q->stash->sched_buf_hash);
- }
-
idpf_queue_set(FLOW_SCH_EN, q);
q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL);
@@ -1741,87 +1601,6 @@ static void idpf_tx_read_tstamp(struct idpf_tx_queue *txq, struct sk_buff *skb)
spin_unlock_bh(&tx_tstamp_caps->status_lock);
}
-/**
- * idpf_tx_clean_stashed_bufs - clean bufs that were stored for
- * out of order completions
- * @txq: queue to clean
- * @compl_tag: completion tag of packet to clean (from completion descriptor)
- * @cleaned: pointer to stats struct to track cleaned packets/bytes
- * @budget: Used to determine if we are in netpoll
- */
-static void idpf_tx_clean_stashed_bufs(struct idpf_tx_queue *txq,
- u16 compl_tag,
- struct libeth_sq_napi_stats *cleaned,
- int budget)
-{
- struct idpf_tx_stash *stash;
- struct hlist_node *tmp_buf;
- struct libeth_cq_pp cp = {
- .dev = txq->dev,
- .ss = cleaned,
- .napi = budget,
- };
-
- /* Buffer completion */
- hash_for_each_possible_safe(txq->stash->sched_buf_hash, stash, tmp_buf,
- hlist, compl_tag) {
- if (unlikely(idpf_tx_buf_compl_tag(&stash->buf) != compl_tag))
- continue;
-
- hash_del(&stash->hlist);
-
- if (stash->buf.type == LIBETH_SQE_SKB &&
- (skb_shinfo(stash->buf.skb)->tx_flags & SKBTX_IN_PROGRESS))
- idpf_tx_read_tstamp(txq, stash->buf.skb);
-
- libeth_tx_complete(&stash->buf, &cp);
-
- /* Push shadow buf back onto stack */
- idpf_buf_lifo_push(&txq->stash->buf_stack, stash);
- }
-}
-
-/**
- * idpf_stash_flow_sch_buffers - store buffer parameters info to be freed at a
- * later time (only relevant for flow scheduling mode)
- * @txq: Tx queue to clean
- * @tx_buf: buffer to store
- */
-static int idpf_stash_flow_sch_buffers(struct idpf_tx_queue *txq,
- struct idpf_tx_buf *tx_buf)
-{
- struct idpf_tx_stash *stash;
-
- if (unlikely(tx_buf->type <= LIBETH_SQE_CTX))
- return 0;
-
- stash = idpf_buf_lifo_pop(&txq->stash->buf_stack);
- if (unlikely(!stash)) {
- net_err_ratelimited("%s: No out-of-order TX buffers left!\n",
- netdev_name(txq->netdev));
-
- return -ENOMEM;
- }
-
- /* Store buffer params in shadow buffer */
- stash->buf.skb = tx_buf->skb;
- stash->buf.bytes = tx_buf->bytes;
- stash->buf.packets = tx_buf->packets;
- stash->buf.type = tx_buf->type;
- stash->buf.nr_frags = tx_buf->nr_frags;
- dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma));
- dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len));
- idpf_tx_buf_compl_tag(&stash->buf) = idpf_tx_buf_compl_tag(tx_buf);
-
- /* Add buffer to buf_hash table to be freed later */
- hash_add(txq->stash->sched_buf_hash, &stash->hlist,
- idpf_tx_buf_compl_tag(&stash->buf));
-
- tx_buf->type = LIBETH_SQE_EMPTY;
-
- return 0;
-}
-
#define idpf_tx_splitq_clean_bump_ntc(txq, ntc, desc, buf) \
do { \
if (unlikely(++(ntc) == (txq)->desc_count)) { \
@@ -1849,14 +1628,8 @@ do { \
* Separate packet completion events will be reported on the completion queue,
* and the buffers will be cleaned separately. The stats are not updated from
* this function when using flow-based scheduling.
- *
- * Furthermore, in flow scheduling mode, check to make sure there are enough
- * reserve buffers to stash the packet. If there are not, return early, which
- * will leave next_to_clean pointing to the packet that failed to be stashed.
- *
- * Return: false in the scenario above, true otherwise.
*/
-static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
+static void idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
int napi_budget,
struct libeth_sq_napi_stats *cleaned,
bool descs_only)
@@ -1870,12 +1643,11 @@ static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
.napi = napi_budget,
};
struct idpf_tx_buf *tx_buf;
- bool clean_complete = true;
if (descs_only) {
/* Bump ring index to mark as cleaned. */
tx_q->next_to_clean = end;
- return true;
+ return;
}
tx_desc = &tx_q->flex_tx[ntc];
@@ -1896,53 +1668,24 @@ static bool idpf_tx_splitq_clean(struct idpf_tx_queue *tx_q, u16 end,
break;
eop_idx = tx_buf->rs_idx;
+ libeth_tx_complete(tx_buf, &cp);
- if (descs_only) {
- if (IDPF_TX_BUF_RSV_UNUSED(tx_q) < tx_buf->nr_frags) {
- clean_complete = false;
- goto tx_splitq_clean_out;
- }
-
- idpf_stash_flow_sch_buffers(tx_q, tx_buf);
+ /* unmap remaining buffers */
+ while (ntc != eop_idx) {
+ idpf_tx_splitq_clean_bump_ntc(tx_q, ntc,
+ tx_desc, tx_buf);
- while (ntc != eop_idx) {
- idpf_tx_splitq_clean_bump_ntc(tx_q, ntc,
- tx_desc, tx_buf);
- idpf_stash_flow_sch_buffers(tx_q, tx_buf);
- }
- } else {
+ /* unmap any remaining paged data */
libeth_tx_complete(tx_buf, &cp);
-
- /* unmap remaining buffers */
- while (ntc != eop_idx) {
- idpf_tx_splitq_clean_bump_ntc(tx_q, ntc,
- tx_desc, tx_buf);
-
- /* unmap any remaining paged data */
- libeth_tx_complete(tx_buf, &cp);
- }
}
fetch_next_txq_desc:
idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf);
}
-tx_splitq_clean_out:
tx_q->next_to_clean = ntc;
-
- return clean_complete;
}
-#define idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, buf) \
-do { \
- (buf)++; \
- (ntc)++; \
- if (unlikely((ntc) == (txq)->desc_count)) { \
- buf = (txq)->tx_buf; \
- ntc = 0; \
- } \
-} while (0)
-
/**
* idpf_tx_clean_bufs - clean flow scheduling TX queue buffers
* @txq: queue to clean
@@ -1953,7 +1696,7 @@ do { \
* Clean all buffers associated with the packet starting at buf_id. Returns the
* byte/segment count for the cleaned packet.
*/
-static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
+static void idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
struct libeth_sq_napi_stats *cleaned,
int budget)
{
@@ -1980,8 +1723,6 @@ static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
libeth_tx_complete(tx_buf, &cp);
idpf_post_buf_refill(txq->refillq, buf_id);
}
-
- return true;
}
/**
@@ -2000,22 +1741,17 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
struct libeth_sq_napi_stats *cleaned,
int budget)
{
- u16 compl_tag;
+ /* RS completion contains queue head for queue based scheduling or
+ * completion tag for flow based scheduling.
+ */
+ u16 rs_compl_val = le16_to_cpu(desc->q_head_compl_tag.q_head);
if (!idpf_queue_has(FLOW_SCH_EN, txq)) {
- u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
-
- idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
+ idpf_tx_splitq_clean(txq, rs_compl_val, budget, cleaned, false);
return;
}
- compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
-
- /* If we didn't clean anything on the ring, this packet must be
- * in the hash table. Go clean it there.
- */
- if (!idpf_tx_clean_bufs(txq, compl_tag, cleaned, budget))
- idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget);
+ idpf_tx_clean_bufs(txq, rs_compl_val, cleaned, budget);
}
/**
@@ -2132,8 +1868,7 @@ static bool idpf_tx_clean_complq(struct idpf_compl_queue *complq, int budget,
/* Update BQL */
nq = netdev_get_tx_queue(tx_q->netdev, tx_q->idx);
- dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) ||
- np->state != __IDPF_VPORT_UP ||
+ dont_wake = !complq_ok || np->state != __IDPF_VPORT_UP ||
!netif_carrier_ok(tx_q->netdev);
/* Check if the TXQ needs to and can be restarted */
__netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
@@ -2204,7 +1939,6 @@ static int idpf_txq_has_room(struct idpf_tx_queue *tx_q, u32 descs_needed,
if (IDPF_DESC_UNUSED(tx_q) < descs_needed ||
IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq) ||
- IDPF_TX_BUF_RSV_LOW(tx_q) ||
idpf_tx_splitq_get_free_bufs(tx_q->refillq) < bufs_needed)
return 0;
return 1;
@@ -2328,10 +2062,8 @@ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
{
ntu++;
- if (ntu == txq->desc_count) {
+ if (ntu == txq->desc_count)
ntu = 0;
- txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq);
- }
return ntu;
}
@@ -2513,8 +2245,6 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
if (unlikely(++i == tx_q->desc_count)) {
tx_desc = &tx_q->flex_tx[0];
i = 0;
- tx_q->compl_tag_cur_gen =
- IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
} else {
tx_desc++;
}
@@ -2545,7 +2275,6 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
if (unlikely(++i == tx_q->desc_count)) {
tx_desc = &tx_q->flex_tx[0];
i = 0;
- tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
} else {
tx_desc++;
}
@@ -2979,10 +2708,9 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
- /* Set the RE bit to catch any packets that may have not been
- * stashed during RS completion cleaning. MIN_GAP is set to
- * MIN_RING size to ensure it will be set at least once each
- * time around the ring.
+ /* Set the RE bit to periodically "clean" the descriptor ring.
+ * MIN_GAP is set to MIN_RING size to ensure it will be set at
+ * least once each time around the ring.
*/
if (idpf_tx_splitq_need_re(tx_q)) {
tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE;
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 36020db47880..256aa3a10463 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -117,10 +117,6 @@ do { \
((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \
(txq)->next_to_clean - (txq)->next_to_use - 1)
-#define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->stash->buf_stack.top)
-#define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \
- (txq)->desc_count >> 2)
-
#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1)
/* Determine the absolute number of completions pending, i.e. the number of
* completions that are expected to arrive on the TX completion queue.
@@ -130,12 +126,6 @@ do { \
0 : U32_MAX) + \
(txq)->num_completions_pending - (txq)->complq->num_completions)
-#define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16
-/* Adjust the generation for the completion tag and wrap if necessary */
-#define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \
- ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
- 0 : (txq)->compl_tag_cur_gen)
-
#define IDPF_TXBUF_NULL U32_MAX
#define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
@@ -153,18 +143,6 @@ union idpf_tx_flex_desc {
#define idpf_tx_buf libeth_sqe
-/**
- * struct idpf_buf_lifo - LIFO for managing OOO completions
- * @top: Used to know how many buffers are left
- * @size: Total size of LIFO
- * @bufs: Backing array
- */
-struct idpf_buf_lifo {
- u16 top;
- u16 size;
- struct idpf_tx_stash **bufs;
-};
-
/**
* struct idpf_tx_offload_params - Offload parameters for a given packet
* @tx_flags: Feature flags enabled for this packet
@@ -475,17 +453,6 @@ struct idpf_tx_queue_stats {
#define IDPF_ITR_IDX_SPACING(spacing, dflt) (spacing ? spacing : dflt)
#define IDPF_DIM_DEFAULT_PROFILE_IX 1
-/**
- * struct idpf_txq_stash - Tx buffer stash for Flow-based scheduling mode
- * @buf_stack: Stack of empty buffers to store buffer info for out of order
- * buffer completions. See struct idpf_buf_lifo
- * @sched_buf_hash: Hash table to store buffers
- */
-struct idpf_txq_stash {
- struct idpf_buf_lifo buf_stack;
- DECLARE_HASHTABLE(sched_buf_hash, 12);
-} ____cacheline_aligned;
-
/**
* struct idpf_rx_queue - software structure representing a receive queue
* @rx: universal receive descriptor array
@@ -630,11 +597,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
* only once at the end of the cleaning routine.
* @clean_budget: singleq only, queue cleaning budget
* @cleaned_pkts: Number of packets cleaned for the above said case
- * @stash: Tx buffer stash for Flow-based scheduling mode
* @refillq: Pointer to refill queue
- * @compl_tag_bufid_m: Completion tag buffer id mask
- * @compl_tag_cur_gen: Used to keep track of current completion tag generation
- * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
* @cached_tstamp_caps: Tx timestamp capabilities negotiated with the CP
* @tstamp_task: Work that handles Tx timestamp read
* @stats_sync: See struct u64_stats_sync
@@ -665,7 +628,6 @@ struct idpf_tx_queue {
u16 desc_count;
u16 tx_min_pkt_len;
- u16 compl_tag_gen_s;
struct net_device *netdev;
__cacheline_group_end_aligned(read_mostly);
@@ -682,13 +644,8 @@ struct idpf_tx_queue {
};
u16 cleaned_pkts;
- struct idpf_txq_stash *stash;
struct idpf_sw_queue *refillq;
- u16 compl_tag_bufid_m;
- u16 compl_tag_cur_gen;
- u16 compl_tag_gen_max;
-
struct idpf_ptp_vport_tx_tstamp_caps *cached_tstamp_caps;
struct work_struct *tstamp_task;
@@ -706,7 +663,7 @@ struct idpf_tx_queue {
__cacheline_group_end_aligned(cold);
};
libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
- 120 + sizeof(struct u64_stats_sync),
+ 104 + sizeof(struct u64_stats_sync),
32);
/**
@@ -917,7 +874,6 @@ struct idpf_rxq_group {
* @vport: Vport back pointer
* @num_txq: Number of TX queues associated
* @txqs: Array of TX queue pointers
- * @stashes: array of OOO stashes for the queues
* @complq: Associated completion queue pointer, split queue only
* @num_completions_pending: Total number of completions pending for the
* completion queue, acculumated for all TX queues
@@ -932,7 +888,6 @@ struct idpf_txq_group {
u16 num_txq;
struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q];
- struct idpf_txq_stash *stashes;
struct idpf_compl_queue *complq;
--
2.39.2
^ permalink raw reply related [flat|nested] 20+ messages in thread
* RE: [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code Joshua Hay
@ 2025-07-28 9:04 ` Loktionov, Aleksandr
2025-08-12 16:45 ` Salin, Samuel
1 sibling, 0 replies; 20+ messages in thread
From: Loktionov, Aleksandr @ 2025-07-28 9:04 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Joshua Hay
> Sent: Friday, July 25, 2025 8:42 PM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove
> obsolete stashing code
>
> With the new Tx buffer management scheme, there is no need for all of
> the stashing mechanisms, the hash table, the reserve buffer stack,
> etc.
> Remove all of that.
>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
> ---
> v3: update comment format
> ---
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 314 ++-----------------
> - drivers/net/ethernet/intel/idpf/idpf_txrx.h | 47 +--
> 2 files changed, 22 insertions(+), 339 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 6563d5831a23..a12cfad566a7 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -8,48 +8,12 @@
> #include "idpf_ptp.h"
> #include "idpf_virtchnl.h"
>
> -struct idpf_tx_stash {
> - struct hlist_node hlist;
> - struct libeth_sqe buf;
> -};
> -
> #define idpf_tx_buf_next(buf) (*(u32 *)&(buf)->priv)
> -#define idpf_tx_buf_compl_tag(buf) (*(u32 *)&(buf)->priv)
> LIBETH_SQE_CHECK_PRIV(u32);
...
> @@ -932,7 +888,6 @@ struct idpf_txq_group {
>
> u16 num_txq;
> struct idpf_tx_queue *txqs[IDPF_LARGE_MAX_Q];
> - struct idpf_txq_stash *stashes;
>
> struct idpf_compl_queue *complq;
>
> --
> 2.39.2
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
@ 2025-07-28 17:06 ` Paul Menzel
2025-07-29 17:15 ` Hay, Joshua A
2025-08-12 16:45 ` Salin, Samuel
1 sibling, 1 reply; 20+ messages in thread
From: Paul Menzel @ 2025-07-28 17:06 UTC (permalink / raw)
To: Joshua Hay; +Cc: intel-wired-lan, netdev, Madhu Chittim
Dear Joshua,
Thank you for your patch.
On 25.07.25 at 20:42, Joshua Hay wrote:
> In certain production environments, it is possible for completion tags
> to collide, meaning N packets with the same completion tag are in flight
> at the same time. In this environment, any given Tx queue is effectively
> used to send both slower traffic and higher throughput traffic
> simultaneously. This is the result of a customer's specific
> configuration in the device pipeline, the details of which Intel cannot
> provide. This configuration results in a small number of out-of-order
> completions, i.e., a small number of packets in flight. The existing
> guardrails in the driver only protect against a large number of packets
> in flight. The slower flow completions are delayed which causes the
> out-of-order completions. The fast flow will continue sending traffic
> and generating tags. Because tags are generated on the fly, the fast
> flow eventually uses the same tag for a packet that is still in flight
> from the slower flow. The driver has no idea which packet it should
> clean when it processes the completion with that tag, but it will look
> for the packet on the buffer ring before the hash table. If the slower
> flow packet completion is processed first, it will end up cleaning the
> fast flow packet on the ring prematurely. This leaves the descriptor
> ring in a bad state resulting in a crashes or Tx timeout.
“in a crash” or just “crashes” with no article.
> In summary, generating a tag when a packet is sent can lead to the same
> tag being associated with multiple packets. This can lead to resource
> leaks, crashes, and/or Tx timeouts.
>
> Before we can replace the tag generation, we need a new mechanism for
> the send path to know what tag to use next. The driver will allocate and
> initialize a refillq for each TxQ with all of the possible free tag
> values. During send, the driver grabs the next free tag from the refillq
> from next_to_clean. While cleaning the packet, the clean routine posts
> the tag back to the refillq's next_to_use to indicate that it is now
> free to use.
>
> This mechanism works exactly the same way as the existing Rx refill
> queues, which post the cleaned buffer IDs back to the buffer queue to be
> reposted to HW. Since we're using the refillqs for both Rx and Tx now,
> genercize some of the existing refillq support.
gener*i*cize
> Note: the refillqs will not be used yet. This is only demonstrating how
> they will be used to pass free tags back to the send path.
>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> v2:
> - reorder refillq init logic to reduce indentation
> - don't drop skb if get free bufid fails, increment busy counter
> - add missing unlikely
> ---
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 93 +++++++++++++++++++--
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 8 +-
> 2 files changed, 91 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 5cf440e09d0a..d5908126163d 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -139,6 +139,9 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue *txq)
> if (!txq->desc_ring)
> return;
>
> + if (txq->refillq)
> + kfree(txq->refillq->ring);
> +
> dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
> txq->desc_ring = NULL;
> txq->next_to_use = 0;
> @@ -244,6 +247,7 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
> struct idpf_tx_queue *tx_q)
> {
> struct device *dev = tx_q->dev;
> + struct idpf_sw_queue *refillq;
> int err;
>
> err = idpf_tx_buf_alloc_all(tx_q);
> @@ -267,6 +271,29 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
> tx_q->next_to_clean = 0;
> idpf_queue_set(GEN_CHK, tx_q);
>
> + if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
> + return 0;
> +
> + refillq = tx_q->refillq;
> + refillq->desc_count = tx_q->desc_count;
> + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
> + GFP_KERNEL);
> + if (!refillq->ring) {
> + err = -ENOMEM;
> + goto err_alloc;
> + }
> +
> + for (u32 i = 0; i < refillq->desc_count; i++)
No need to limit the length, as far as I can see.
> + refillq->ring[i] =
> + FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) |
> + FIELD_PREP(IDPF_RFL_BI_GEN_M,
> + idpf_queue_has(GEN_CHK, refillq));
> +
> + /* Go ahead and flip the GEN bit since this counts as filling
> + * up the ring, i.e. we already ring wrapped.
> + */
> + idpf_queue_change(GEN_CHK, refillq);
> +
> return 0;
>
> err_alloc:
> @@ -603,18 +630,18 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_buf_queue *bufq)
> }
>
> /**
> - * idpf_rx_post_buf_refill - Post buffer id to refill queue
> + * idpf_post_buf_refill - Post buffer id to refill queue
> * @refillq: refill queue to post to
> * @buf_id: buffer id to post
> */
> -static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
> +static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
> {
> u32 nta = refillq->next_to_use;
>
> /* store the buffer ID and the SW maintained GEN bit to the refillq */
> refillq->ring[nta] =
> - FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
> - FIELD_PREP(IDPF_RX_BI_GEN_M,
> + FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) |
> + FIELD_PREP(IDPF_RFL_BI_GEN_M,
> idpf_queue_has(GEN_CHK, refillq));
>
> if (unlikely(++nta == refillq->desc_count)) {
> @@ -995,6 +1022,11 @@ static void idpf_txq_group_rel(struct idpf_vport *vport)
> struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
>
> for (j = 0; j < txq_grp->num_txq; j++) {
> + if (flow_sch_en) {
> + kfree(txq_grp->txqs[j]->refillq);
> + txq_grp->txqs[j]->refillq = NULL;
> + }
> +
> kfree(txq_grp->txqs[j]);
> txq_grp->txqs[j] = NULL;
> }
> @@ -1414,6 +1446,13 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
> }
>
> idpf_queue_set(FLOW_SCH_EN, q);
> +
> + q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL);
> + if (!q->refillq)
> + goto err_alloc;
> +
> + idpf_queue_set(GEN_CHK, q->refillq);
> + idpf_queue_set(RFL_GEN_CHK, q->refillq);
> }
>
> if (!split)
> @@ -2005,6 +2044,8 @@ static void idpf_tx_handle_rs_completion(struct idpf_tx_queue *txq,
>
> compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
>
> + idpf_post_buf_refill(txq->refillq, compl_tag);
> +
> /* If we didn't clean anything on the ring, this packet must be
> * in the hash table. Go clean it there.
> */
> @@ -2364,6 +2405,37 @@ static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
> return ntu;
> }
>
> +/**
> + * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue
> + * @refillq: refill queue to get buffer ID from
> + * @buf_id: return buffer ID
> + *
> + * Return: true if a buffer ID was found, false if not
> + */
> +static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
> + u16 *buf_id)
> +{
> + u16 ntc = refillq->next_to_clean;
Hmm, why not `u32`?
struct idpf_sw_queue {
__cacheline_group_begin_aligned(read_mostly);
u32 *ring;
DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
u32 desc_count;
__cacheline_group_end_aligned(read_mostly);
__cacheline_group_begin_aligned(read_write);
u32 next_to_use;
u32 next_to_clean;
__cacheline_group_end_aligned(read_write);
};
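That is, presumably just (sketch):

	u32 ntc = refillq->next_to_clean;

so the local matches the u32 next_to_clean field above.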
Kind regards,
Paul Menzel
> + u32 refill_desc;
> +
> + refill_desc = refillq->ring[ntc];
> +
> + if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) !=
> + !!(refill_desc & IDPF_RFL_BI_GEN_M)))
> + return false;
> +
> + *buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
> +
> + if (unlikely(++ntc == refillq->desc_count)) {
> + idpf_queue_change(RFL_GEN_CHK, refillq);
> + ntc = 0;
> + }
> +
> + refillq->next_to_clean = ntc;
> +
> + return true;
> +}
> +
> /**
> * idpf_tx_splitq_map - Build the Tx flex descriptor
> * @tx_q: queue to send buffer on
> @@ -2912,6 +2984,13 @@ static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
> }
>
> if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
> + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
> + &tx_params.compl_tag))) {
> + u64_stats_update_begin(&tx_q->stats_sync);
> + u64_stats_inc(&tx_q->q_stats.q_busy);
> + u64_stats_update_end(&tx_q->stats_sync);
> + }
> +
> tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
> tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
> /* Set the RE bit to catch any packets that may have not been
> @@ -3464,7 +3543,7 @@ static int idpf_rx_splitq_clean(struct idpf_rx_queue *rxq, int budget)
> skip_data:
> rx_buf->page = NULL;
>
> - idpf_rx_post_buf_refill(refillq, buf_id);
> + idpf_post_buf_refill(refillq, buf_id);
> IDPF_RX_BUMP_NTC(rxq, ntc);
>
> /* skip if it is non EOP desc */
> @@ -3572,10 +3651,10 @@ static void idpf_rx_clean_refillq(struct idpf_buf_queue *bufq,
> bool failure;
>
> if (idpf_queue_has(RFL_GEN_CHK, refillq) !=
> - !!(refill_desc & IDPF_RX_BI_GEN_M))
> + !!(refill_desc & IDPF_RFL_BI_GEN_M))
> break;
>
> - buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
> + buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
> failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc);
> if (failure)
> break;
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index 36a0f828a6f8..6924bee6ff5b 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -107,8 +107,8 @@ do { \
> */
> #define IDPF_TX_SPLITQ_RE_MIN_GAP 64
>
> -#define IDPF_RX_BI_GEN_M BIT(16)
> -#define IDPF_RX_BI_BUFID_M GENMASK(15, 0)
> +#define IDPF_RFL_BI_GEN_M BIT(16)
> +#define IDPF_RFL_BI_BUFID_M GENMASK(15, 0)
>
> #define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
> #define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
> @@ -621,6 +621,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue, 64,
> * @cleaned_pkts: Number of packets cleaned for the above said case
> * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
> * @stash: Tx buffer stash for Flow-based scheduling mode
> + * @refillq: Pointer to refill queue
> * @compl_tag_bufid_m: Completion tag buffer id mask
> * @compl_tag_cur_gen: Used to keep track of current completion tag generation
> * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
> @@ -670,6 +671,7 @@ struct idpf_tx_queue {
>
> u16 tx_max_bufs;
> struct idpf_txq_stash *stash;
> + struct idpf_sw_queue *refillq;
>
> u16 compl_tag_bufid_m;
> u16 compl_tag_cur_gen;
> @@ -691,7 +693,7 @@ struct idpf_tx_queue {
> __cacheline_group_end_aligned(cold);
> };
> libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
> - 112 + sizeof(struct u64_stats_sync),
> + 120 + sizeof(struct u64_stats_sync),
> 24);
>
> /**
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-28 17:06 ` Paul Menzel
@ 2025-07-29 17:15 ` Hay, Joshua A
2025-07-29 22:53 ` Paul Menzel
0 siblings, 1 reply; 20+ messages in thread
From: Hay, Joshua A @ 2025-07-29 17:15 UTC (permalink / raw)
To: Paul Menzel
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
Chittim, Madhu
> > In certain production environments, it is possible for completion tags
> > to collide, meaning N packets with the same completion tag are in flight
> > at the same time. In this environment, any given Tx queue is effectively
> > used to send both slower traffic and higher throughput traffic
> > simultaneously. This is the result of a customer's specific
> > configuration in the device pipeline, the details of which Intel cannot
> > provide. This configuration results in a small number of out-of-order
> > completions, i.e., a small number of packets in flight. The existing
> > guardrails in the driver only protect against a large number of packets
> > in flight. The slower flow completions are delayed which causes the
> > out-of-order completions. The fast flow will continue sending traffic
> > and generating tags. Because tags are generated on the fly, the fast
> > flow eventually uses the same tag for a packet that is still in flight
> > from the slower flow. The driver has no idea which packet it should
> > clean when it processes the completion with that tag, but it will look
> > for the packet on the buffer ring before the hash table. If the slower
> > flow packet completion is processed first, it will end up cleaning the
> > fast flow packet on the ring prematurely. This leaves the descriptor
> > ring in a bad state resulting in a crashes or Tx timeout.
>
> “in a crash” or just “crashes” with no article.
Will fix.
>
> > In summary, generating a tag when a packet is sent can lead to the same
> > tag being associated with multiple packets. This can lead to resource
> > leaks, crashes, and/or Tx timeouts.
> >
> > Before we can replace the tag generation, we need a new mechanism for
> > the send path to know what tag to use next. The driver will allocate and
> > initialize a refillq for each TxQ with all of the possible free tag
> > values. During send, the driver grabs the next free tag from the refillq
> > from next_to_clean. While cleaning the packet, the clean routine posts
> > the tag back to the refillq's next_to_use to indicate that it is now
> > free to use.
> >
> > This mechanism works exactly the same way as the existing Rx refill
> > queues, which post the cleaned buffer IDs back to the buffer queue to be
> > reposted to HW. Since we're using the refillqs for both Rx and Tx now,
> > genercize some of the existing refillq support.
>
> gener*i*cize
Will fix.
>
> > Note: the refillqs will not be used yet. This is only demonstrating how
> > they will be used to pass free tags back to the send path.
> >
> > Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> > Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> > ---
> > v2:
> > - reorder refillq init logic to reduce indentation
> > - don't drop skb if get free bufid fails, increment busy counter
> > - add missing unlikely
> > ---
> > drivers/net/ethernet/intel/idpf/idpf_txrx.c | 93 +++++++++++++++++++--
> > drivers/net/ethernet/intel/idpf/idpf_txrx.h | 8 +-
> > 2 files changed, 91 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > index 5cf440e09d0a..d5908126163d 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > @@ -139,6 +139,9 @@ static void idpf_tx_desc_rel(struct idpf_tx_queue
> *txq)
> > if (!txq->desc_ring)
> > return;
> >
> > + if (txq->refillq)
> > + kfree(txq->refillq->ring);
> > +
> > dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
> > txq->desc_ring = NULL;
> > txq->next_to_use = 0;
> > @@ -244,6 +247,7 @@ static int idpf_tx_desc_alloc(const struct idpf_vport
> *vport,
> > struct idpf_tx_queue *tx_q)
> > {
> > struct device *dev = tx_q->dev;
> > + struct idpf_sw_queue *refillq;
> > int err;
> >
> > err = idpf_tx_buf_alloc_all(tx_q);
> > @@ -267,6 +271,29 @@ static int idpf_tx_desc_alloc(const struct
> idpf_vport *vport,
> > tx_q->next_to_clean = 0;
> > idpf_queue_set(GEN_CHK, tx_q);
> >
> > + if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
> > + return 0;
> > +
> > + refillq = tx_q->refillq;
> > + refillq->desc_count = tx_q->desc_count;
> > + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
> > + GFP_KERNEL);
> > + if (!refillq->ring) {
> > + err = -ENOMEM;
> > + goto err_alloc;
> > + }
> > +
> > + for (u32 i = 0; i < refillq->desc_count; i++)
>
> No need to limit the length, as far as I can see.
Apologies, I'm not sure I follow. Can you please clarify?
>
> > + refillq->ring[i] =
> > + FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) |
> > + FIELD_PREP(IDPF_RFL_BI_GEN_M,
> > + idpf_queue_has(GEN_CHK, refillq));
> > +
> > + /* Go ahead and flip the GEN bit since this counts as filling
> > + * up the ring, i.e. we already ring wrapped.
> > + */
> > + idpf_queue_change(GEN_CHK, refillq);
> > +
> > return 0;
> >
> > err_alloc:
> > @@ -603,18 +630,18 @@ static int idpf_rx_hdr_buf_alloc_all(struct
> idpf_buf_queue *bufq)
> > }
> >
> > /**
> > - * idpf_rx_post_buf_refill - Post buffer id to refill queue
> > + * idpf_post_buf_refill - Post buffer id to refill queue
> > * @refillq: refill queue to post to
> > * @buf_id: buffer id to post
> > */
> > -static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16
> buf_id)
> > +static void idpf_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
> > {
> > u32 nta = refillq->next_to_use;
> >
> > /* store the buffer ID and the SW maintained GEN bit to the refillq */
> > refillq->ring[nta] =
> > - FIELD_PREP(IDPF_RX_BI_BUFID_M, buf_id) |
> > - FIELD_PREP(IDPF_RX_BI_GEN_M,
> > + FIELD_PREP(IDPF_RFL_BI_BUFID_M, buf_id) |
> > + FIELD_PREP(IDPF_RFL_BI_GEN_M,
> > idpf_queue_has(GEN_CHK, refillq));
> >
> > if (unlikely(++nta == refillq->desc_count)) {
> > @@ -995,6 +1022,11 @@ static void idpf_txq_group_rel(struct idpf_vport
> *vport)
> > struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
> >
> > for (j = 0; j < txq_grp->num_txq; j++) {
> > + if (flow_sch_en) {
> > + kfree(txq_grp->txqs[j]->refillq);
> > + txq_grp->txqs[j]->refillq = NULL;
> > + }
> > +
> > kfree(txq_grp->txqs[j]);
> > txq_grp->txqs[j] = NULL;
> > }
> > @@ -1414,6 +1446,13 @@ static int idpf_txq_group_alloc(struct idpf_vport
> *vport, u16 num_txq)
> > }
> >
> > idpf_queue_set(FLOW_SCH_EN, q);
> > +
> > + q->refillq = kzalloc(sizeof(*q->refillq), GFP_KERNEL);
> > + if (!q->refillq)
> > + goto err_alloc;
> > +
> > + idpf_queue_set(GEN_CHK, q->refillq);
> > + idpf_queue_set(RFL_GEN_CHK, q->refillq);
> > }
> >
> > if (!split)
> > @@ -2005,6 +2044,8 @@ static void idpf_tx_handle_rs_completion(struct
> idpf_tx_queue *txq,
> >
> > compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
> >
> > + idpf_post_buf_refill(txq->refillq, compl_tag);
> > +
> > /* If we didn't clean anything on the ring, this packet must be
> > * in the hash table. Go clean it there.
> > */
> > @@ -2364,6 +2405,37 @@ static unsigned int
> idpf_tx_splitq_bump_ntu(struct idpf_tx_queue *txq, u16 ntu)
> > return ntu;
> > }
> >
> > +/**
> > + * idpf_tx_get_free_buf_id - get a free buffer ID from the refill queue
> > + * @refillq: refill queue to get buffer ID from
> > + * @buf_id: return buffer ID
> > + *
> > + * Return: true if a buffer ID was found, false if not
> > + */
> > +static bool idpf_tx_get_free_buf_id(struct idpf_sw_queue *refillq,
> > + u16 *buf_id)
> > +{
> > + u16 ntc = refillq->next_to_clean;
>
> Hmm, why not `u32`?
Ah, good catch, will fix.
>
> struct idpf_sw_queue {
> __cacheline_group_begin_aligned(read_mostly);
> u32 *ring;
>
> DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
> u32 desc_count;
> __cacheline_group_end_aligned(read_mostly);
>
> __cacheline_group_begin_aligned(read_write);
> u32 next_to_use;
> u32 next_to_clean;
> __cacheline_group_end_aligned(read_write);
> };
>
>
> Kind regards,
>
> Paul Menzel
Thanks!
Josh
>
>
> > + u32 refill_desc;
> > +
> > + refill_desc = refillq->ring[ntc];
> > +
> > + if (unlikely(idpf_queue_has(RFL_GEN_CHK, refillq) !=
> > + !!(refill_desc & IDPF_RFL_BI_GEN_M)))
> > + return false;
> > +
> > + *buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
> > +
> > + if (unlikely(++ntc == refillq->desc_count)) {
> > + idpf_queue_change(RFL_GEN_CHK, refillq);
> > + ntc = 0;
> > + }
> > +
> > + refillq->next_to_clean = ntc;
> > +
> > + return true;
> > +}
> > +
> > /**
> > * idpf_tx_splitq_map - Build the Tx flex descriptor
> > * @tx_q: queue to send buffer on
> > @@ -2912,6 +2984,13 @@ static netdev_tx_t idpf_tx_splitq_frame(struct
> sk_buff *skb,
> > }
> >
> > if (idpf_queue_has(FLOW_SCH_EN, tx_q)) {
> > + if (unlikely(!idpf_tx_get_free_buf_id(tx_q->refillq,
> > + &tx_params.compl_tag))) {
> > + u64_stats_update_begin(&tx_q->stats_sync);
> > + u64_stats_inc(&tx_q->q_stats.q_busy);
> > + u64_stats_update_end(&tx_q->stats_sync);
> > + }
> > +
> > tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
> > tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
> > /* Set the RE bit to catch any packets that may have not been
> > @@ -3464,7 +3543,7 @@ static int idpf_rx_splitq_clean(struct
> idpf_rx_queue *rxq, int budget)
> > skip_data:
> > rx_buf->page = NULL;
> >
> > - idpf_rx_post_buf_refill(refillq, buf_id);
> > + idpf_post_buf_refill(refillq, buf_id);
> > IDPF_RX_BUMP_NTC(rxq, ntc);
> >
> > /* skip if it is non EOP desc */
> > @@ -3572,10 +3651,10 @@ static void idpf_rx_clean_refillq(struct
> idpf_buf_queue *bufq,
> > bool failure;
> >
> > if (idpf_queue_has(RFL_GEN_CHK, refillq) !=
> > - !!(refill_desc & IDPF_RX_BI_GEN_M))
> > + !!(refill_desc & IDPF_RFL_BI_GEN_M))
> > break;
> >
> > - buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
> > + buf_id = FIELD_GET(IDPF_RFL_BI_BUFID_M, refill_desc);
> > failure = idpf_rx_update_bufq_desc(bufq, buf_id, buf_desc);
> > if (failure)
> > break;
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> > index 36a0f828a6f8..6924bee6ff5b 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> > @@ -107,8 +107,8 @@ do {
> \
> > */
> > #define IDPF_TX_SPLITQ_RE_MIN_GAP 64
> >
> > -#define IDPF_RX_BI_GEN_M BIT(16)
> > -#define IDPF_RX_BI_BUFID_M GENMASK(15, 0)
> > +#define IDPF_RFL_BI_GEN_M BIT(16)
> > +#define IDPF_RFL_BI_BUFID_M GENMASK(15, 0)
> >
> > #define IDPF_RXD_EOF_SPLITQ
> VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
> > #define IDPF_RXD_EOF_SINGLEQ
> VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
> > @@ -621,6 +621,7 @@ libeth_cacheline_set_assert(struct idpf_rx_queue,
> 64,
> > * @cleaned_pkts: Number of packets cleaned for the above said case
> > * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
> > * @stash: Tx buffer stash for Flow-based scheduling mode
> > + * @refillq: Pointer to refill queue
> > * @compl_tag_bufid_m: Completion tag buffer id mask
> > * @compl_tag_cur_gen: Used to keep track of current completion tag
> generation
> > * @compl_tag_gen_max: To determine when compl_tag_cur_gen should
> be reset
> > @@ -670,6 +671,7 @@ struct idpf_tx_queue {
> >
> > u16 tx_max_bufs;
> > struct idpf_txq_stash *stash;
> > + struct idpf_sw_queue *refillq;
> >
> > u16 compl_tag_bufid_m;
> > u16 compl_tag_cur_gen;
> > @@ -691,7 +693,7 @@ struct idpf_tx_queue {
> > __cacheline_group_end_aligned(cold);
> > };
> > libeth_cacheline_set_assert(struct idpf_tx_queue, 64,
> > - 112 + sizeof(struct u64_stats_sync),
> > + 120 + sizeof(struct u64_stats_sync),
> > 24);
> >
> > /**
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-29 17:15 ` Hay, Joshua A
@ 2025-07-29 22:53 ` Paul Menzel
2025-07-31 21:08 ` Hay, Joshua A
0 siblings, 1 reply; 20+ messages in thread
From: Paul Menzel @ 2025-07-29 22:53 UTC (permalink / raw)
To: Joshua A Hay; +Cc: intel-wired-lan, netdev, Madhu Chittim
Dear Joshua,
Thank you for your reply.
On 29.07.25 at 19:15, Hay, Joshua A wrote:
>>> In certain production environments, it is possible for completion tags
>>> to collide, meaning N packets with the same completion tag are in flight
>>> at the same time. In this environment, any given Tx queue is effectively
>>> used to send both slower traffic and higher throughput traffic
>>> simultaneously. This is the result of a customer's specific
>>> configuration in the device pipeline, the details of which Intel cannot
>>> provide. This configuration results in a small number of out-of-order
>>> completions, i.e., a small number of packets in flight. The existing
>>> guardrails in the driver only protect against a large number of packets
>>> in flight. The slower flow completions are delayed which causes the
>>> out-of-order completions. The fast flow will continue sending traffic
>>> and generating tags. Because tags are generated on the fly, the fast
>>> flow eventually uses the same tag for a packet that is still in flight
>>> from the slower flow. The driver has no idea which packet it should
>>> clean when it processes the completion with that tag, but it will look
>>> for the packet on the buffer ring before the hash table. If the slower
>>> flow packet completion is processed first, it will end up cleaning the
>>> fast flow packet on the ring prematurely. This leaves the descriptor
>>> ring in a bad state resulting in a crashes or Tx timeout.
>>
>> “in a crash” or just “crashes” with no article.
>
> Will fix.
>
>>
>>> In summary, generating a tag when a packet is sent can lead to the same
>>> tag being associated with multiple packets. This can lead to resource
>>> leaks, crashes, and/or Tx timeouts.
>>>
>>> Before we can replace the tag generation, we need a new mechanism for
>>> the send path to know what tag to use next. The driver will allocate and
>>> initialize a refillq for each TxQ with all of the possible free tag
>>> values. During send, the driver grabs the next free tag from the refillq
>>> from next_to_clean. While cleaning the packet, the clean routine posts
>>> the tag back to the refillq's next_to_use to indicate that it is now
>>> free to use.
>>>
>>> This mechanism works exactly the same way as the existing Rx refill
>>> queues, which post the cleaned buffer IDs back to the buffer queue to be
>>> reposted to HW. Since we're using the refillqs for both Rx and Tx now,
>>> genercize some of the existing refillq support.
>>
>> gener*i*cize
>
> Will fix.
>
>>
>>> Note: the refillqs will not be used yet. This is only demonstrating how
>>> they will be used to pass free tags back to the send path.
>>>
>>> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
>>> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
>>> ---
>>> v2:
>>> - reorder refillq init logic to reduce indentation
>>> - don't drop skb if get free bufid fails, increment busy counter
>>> - add missing unlikely
>>> ---
>>> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 93 +++++++++++++++++++--
>>> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 8 +-
>>> 2 files changed, 91 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> index 5cf440e09d0a..d5908126163d 100644
>>> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>>> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
[…]
>>> @@ -267,6 +271,29 @@ static int idpf_tx_desc_alloc(const struct idpf_vport *vport,
>>> tx_q->next_to_clean = 0;
>>> idpf_queue_set(GEN_CHK, tx_q);
>>>
>>> + if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
>>> + return 0;
>>> +
>>> + refillq = tx_q->refillq;
>>> + refillq->desc_count = tx_q->desc_count;
>>> + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
>>> + GFP_KERNEL);
>>> + if (!refillq->ring) {
>>> + err = -ENOMEM;
>>> + goto err_alloc;
>>> + }
>>> +
>>> + for (u32 i = 0; i < refillq->desc_count; i++)
>>
>> No need to limit the length, as far as I can see.
>
> Apologies, I'm not sure I follow. Can you please clarify?
Sorry for being ambiguous. I meant: just use `unsigned int` as the type
for the counting variable.
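I.e., a sketch of what was meant, the same loop with only the counter
type changed (illustrative only):

	for (unsigned int i = 0; i < refillq->desc_count; i++)
		refillq->ring[i] =
			FIELD_PREP(IDPF_RFL_BI_BUFID_M, i) |
			FIELD_PREP(IDPF_RFL_BI_GEN_M,
				   idpf_queue_has(GEN_CHK, refillq));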
[…]
Kind regards,
Paul
PS: Just a note that your client seems to have wrapped some of the code
lines.
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-29 22:53 ` Paul Menzel
@ 2025-07-31 21:08 ` Hay, Joshua A
0 siblings, 0 replies; 20+ messages in thread
From: Hay, Joshua A @ 2025-07-31 21:08 UTC (permalink / raw)
To: Paul Menzel
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
Chittim, Madhu
> >>> In certain production environments, it is possible for completion tags
> >>> to collide, meaning N packets with the same completion tag are in flight
> >>> at the same time. In this environment, any given Tx queue is effectively
> >>> used to send both slower traffic and higher throughput traffic
> >>> simultaneously. This is the result of a customer's specific
> >>> configuration in the device pipeline, the details of which Intel cannot
> >>> provide. This configuration results in a small number of out-of-order
> >>> completions, i.e., a small number of packets in flight. The existing
> >>> guardrails in the driver only protect against a large number of packets
> >>> in flight. The slower flow completions are delayed which causes the
> >>> out-of-order completions. The fast flow will continue sending traffic
> >>> and generating tags. Because tags are generated on the fly, the fast
> >>> flow eventually uses the same tag for a packet that is still in flight
> >>> from the slower flow. The driver has no idea which packet it should
> >>> clean when it processes the completion with that tag, but it will look
> >>> for the packet on the buffer ring before the hash table. If the slower
> >>> flow packet completion is processed first, it will end up cleaning the
> >>> fast flow packet on the ring prematurely. This leaves the descriptor
> >>> ring in a bad state resulting in a crashes or Tx timeout.
> >>
> >> “in a crash” or just “crashes” with no article.
> >
> > Will fix.
> >
> >>
> >>> In summary, generating a tag when a packet is sent can lead to the same
> >>> tag being associated with multiple packets. This can lead to resource
> >>> leaks, crashes, and/or Tx timeouts.
> >>>
> >>> Before we can replace the tag generation, we need a new mechanism for
> >>> the send path to know what tag to use next. The driver will allocate and
> >>> initialize a refillq for each TxQ with all of the possible free tag
> >>> values. During send, the driver grabs the next free tag from the refillq
> >>> from next_to_clean. While cleaning the packet, the clean routine posts
> >>> the tag back to the refillq's next_to_use to indicate that it is now
> >>> free to use.
> >>>
> >>> This mechanism works exactly the same way as the existing Rx refill
> >>> queues, which post the cleaned buffer IDs back to the buffer queue to be
> >>> reposted to HW. Since we're using the refillqs for both Rx and Tx now,
> >>> genercize some of the existing refillq support.
> >>
> >> gener*i*cize
> >
> > Will fix.
> >
> >>
> >>> Note: the refillqs will not be used yet. This is only demonstrating how
> >>> they will be used to pass free tags back to the send path.
> >>>
> >>> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> >>> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> >>> ---
> >>> v2:
> >>> - reorder refillq init logic to reduce indentation
> >>> - don't drop skb if get free bufid fails, increment busy counter
> >>> - add missing unlikely
> >>> ---
> >>> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 93
> +++++++++++++++++++--
> >>> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 8 +-
> >>> 2 files changed, 91 insertions(+), 10 deletions(-)
> >>>
> >>> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> >>> index 5cf440e09d0a..d5908126163d 100644
> >>> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> >>> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
>
> […]
>
> >>> @@ -267,6 +271,29 @@ static int idpf_tx_desc_alloc(const struct
> idpf_vport *vport,
> >>> tx_q->next_to_clean = 0;
> >>> idpf_queue_set(GEN_CHK, tx_q);
> >>>
> >>> + if (!idpf_queue_has(FLOW_SCH_EN, tx_q))
> >>> + return 0;
> >>> +
> >>> + refillq = tx_q->refillq;
> >>> + refillq->desc_count = tx_q->desc_count;
> >>> + refillq->ring = kcalloc(refillq->desc_count, sizeof(u32),
> >>> + GFP_KERNEL);
> >>> + if (!refillq->ring) {
> >>> + err = -ENOMEM;
> >>> + goto err_alloc;
> >>> + }
> >>> +
> >>> + for (u32 i = 0; i < refillq->desc_count; i++)
> >>
> >> No need to limit the length, as far as I can see.
> >
> > Apologies, I'm not sure I follow. Can you please clarify?
>
> Sorry for being ambiguous. I meant: just use `unsigned int` as the type
> for the counting variable.
Ah, sure, will fix.
>
> […]
>
>
> Kind regards,
>
> Paul
>
>
> PS: Just a note that your client seems to have wrapped some of the code
> lines.
Noted, I'll look into it. Thanks!
Josh
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool Joshua Hay
@ 2025-08-04 17:02 ` Alexander Lobakin
2025-08-05 22:40 ` Hay, Joshua A
2025-08-12 16:45 ` Salin, Samuel
1 sibling, 1 reply; 20+ messages in thread
From: Alexander Lobakin @ 2025-08-04 17:02 UTC (permalink / raw)
To: Joshua Hay
Cc: intel-wired-lan, netdev, Luigi Rizzo, Brian Vazquez,
Madhu Chittim, Aleksandr Loktionov
From: Joshua Hay <joshua.a.hay@intel.com>
Date: Fri, 25 Jul 2025 11:42:21 -0700
> Replace the TxQ buffer ring with one large pool/array of buffers (only
> for flow scheduling). This eliminates the tag generation and makes it
> impossible for a tag to be associated with more than one packet.
[...]
> -static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16 compl_tag,
> - struct libeth_sq_napi_stats *cleaned,
> - int budget)
> +static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
> + struct libeth_sq_napi_stats *cleaned,
> + int budget)
> {
> - u16 idx = compl_tag & txq->compl_tag_bufid_m;
> struct idpf_tx_buf *tx_buf = NULL;
> struct libeth_cq_pp cp = {
> .dev = txq->dev,
> .ss = cleaned,
> .napi = budget,
> };
> - u16 ntc, orig_idx = idx;
> -
> - tx_buf = &txq->tx_buf[idx];
> -
> - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX ||
> - idpf_tx_buf_compl_tag(tx_buf) != compl_tag))
> - return false;
>
> + tx_buf = &txq->tx_buf[buf_id];
> if (tx_buf->type == LIBETH_SQE_SKB) {
> if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS)
> idpf_tx_read_tstamp(txq, tx_buf->skb);
>
> libeth_tx_complete(tx_buf, &cp);
> + idpf_post_buf_refill(txq->refillq, buf_id);
> }
>
> - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
> + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
> + u16 buf_id = idpf_tx_buf_next(tx_buf);
This line adds a new -Wshadow warning. This function has an argument
named 'buf_id' and here you declare a variable with the same name.
While -Wshadow gets enabled only with W=2, it would be nice if you didn't
introduce it anyway.
If I understand the code correctly, you can just remove that `u16` since
you don't use the former buf_id at this point anyway.
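For clarity, a minimal sketch of that loop with the `u16` dropped, so the
function's u32 buf_id parameter is simply reused (illustrative only, not a
proposed hunk):

	while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
		/* reuse the u32 function parameter; no shadowing */
		buf_id = idpf_tx_buf_next(tx_buf);

		tx_buf = &txq->tx_buf[buf_id];
		libeth_tx_complete(tx_buf, &cp);
		idpf_post_buf_refill(txq->refillq, buf_id);
	}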
>
> - while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) {
> + tx_buf = &txq->tx_buf[buf_id];
> libeth_tx_complete(tx_buf, &cp);
> - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
> + idpf_post_buf_refill(txq->refillq, buf_id);
> }
Thanks,
Olek
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool
2025-08-04 17:02 ` Alexander Lobakin
@ 2025-08-05 22:40 ` Hay, Joshua A
0 siblings, 0 replies; 20+ messages in thread
From: Hay, Joshua A @ 2025-08-05 22:40 UTC (permalink / raw)
To: Lobakin, Aleksander
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
Luigi Rizzo, Brian Vazquez, Chittim, Madhu, Loktionov, Aleksandr
> > -static bool idpf_tx_clean_buf_ring(struct idpf_tx_queue *txq, u16
> compl_tag,
> > - struct libeth_sq_napi_stats *cleaned,
> > - int budget)
> > +static bool idpf_tx_clean_bufs(struct idpf_tx_queue *txq, u32 buf_id,
> > + struct libeth_sq_napi_stats *cleaned,
> > + int budget)
> > {
> > - u16 idx = compl_tag & txq->compl_tag_bufid_m;
> > struct idpf_tx_buf *tx_buf = NULL;
> > struct libeth_cq_pp cp = {
> > .dev = txq->dev,
> > .ss = cleaned,
> > .napi = budget,
> > };
> > - u16 ntc, orig_idx = idx;
> > -
> > - tx_buf = &txq->tx_buf[idx];
> > -
> > - if (unlikely(tx_buf->type <= LIBETH_SQE_CTX ||
> > - idpf_tx_buf_compl_tag(tx_buf) != compl_tag))
> > - return false;
> >
> > + tx_buf = &txq->tx_buf[buf_id];
> > if (tx_buf->type == LIBETH_SQE_SKB) {
> > if (skb_shinfo(tx_buf->skb)->tx_flags & SKBTX_IN_PROGRESS)
> > idpf_tx_read_tstamp(txq, tx_buf->skb);
> >
> > libeth_tx_complete(tx_buf, &cp);
> > + idpf_post_buf_refill(txq->refillq, buf_id);
> > }
> >
> > - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
> > + while (idpf_tx_buf_next(tx_buf) != IDPF_TXBUF_NULL) {
> > + u16 buf_id = idpf_tx_buf_next(tx_buf);
>
> This line adds a new -Wshadow warning. This function has an argument
> named 'buf_id' and here you declare a variable with the same name.
> While -Wshadow gets enabled only with W=2, it would be nice if you didn't
> introduce it anyway.
> If I understand the code correctly, you can just remove that `u16` since
> you don't use the former buf_id at this point anyway.
Ah, good catch, will fix. Correct, the u16 can just be removed.
>
> >
> > - while (idpf_tx_buf_compl_tag(tx_buf) == compl_tag) {
> > + tx_buf = &txq->tx_buf[buf_id];
> > libeth_tx_complete(tx_buf, &cp);
> > - idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
> > + idpf_post_buf_refill(txq->refillq, buf_id);
> > }
>
> Thanks,
> Olek
Thanks,
Josh
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
2025-07-28 17:06 ` Paul Menzel
@ 2025-08-12 16:45 ` Salin, Samuel
1 sibling, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx
> refillqs in flow scheduling mode
>
> In certain production environments, it is possible for completion tags to
> collide, meaning N packets with the same completion tag are in flight at the
> same time. In this environment, any given Tx queue is effectively used to send
> both slower traffic and higher throughput traffic simultaneously. This is the
> result of a customer's specific configuration in the device pipeline, the details
> of which Intel cannot provide. This configuration results in a small number of
> out-of-order completions, i.e., a small number of packets in flight. The existing
> guardrails in the driver only protect against a large number of packets in flight.
> The slower flow completions are delayed which causes the out-of-order
> completions. The fast flow will continue sending traffic and generating tags.
> Because tags are generated on the fly, the fast flow eventually uses the same
> tag for a packet that is still in flight from the slower flow. The driver has no idea
> which packet it should clean when it processes the completion with that tag,
> but it will look for the packet on the buffer ring before the hash table. If the
> slower flow packet completion is processed first, it will end up cleaning the fast
> flow packet on the ring prematurely. This leaves the descriptor ring in a bad
> state, resulting in crashes or Tx timeouts.
>
> In summary, generating a tag when a packet is sent can lead to the same tag
> being associated with multiple packets. This can lead to resource leaks,
> crashes, and/or Tx timeouts.
>
> Before we can replace the tag generation, we need a new mechanism for the
> send path to know what tag to use next. The driver will allocate and initialize a
> refillq for each TxQ with all of the possible free tag values. During send, the
> driver grabs the next free tag from the refillq from next_to_clean. While
> cleaning the packet, the clean routine posts the tag back to the refillq's
> next_to_use to indicate that it is now free to use.
>
> This mechanism works exactly the same way as the existing Rx refill queues,
> which post the cleaned buffer IDs back to the buffer queue to be reposted to
> HW. Since we're using the refillqs for both Rx and Tx now, genericize some of
> the existing refillq support.
>
> Note: the refillqs will not be used yet. This is only demonstrating how they will
> be used to pass free tags back to the send path.
>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> v2:
> - reorder refillq init logic to reduce indentation
> - don't drop skb if get free bufid fails, increment busy counter
> - add missing unlikely
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
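As an illustration of the refillq mechanism described in the message above,
here is a minimal userspace-style sketch. The names (tag_refillq,
tag_refillq_get, tag_refillq_post) and the count-based empty check are
assumptions made for the sketch only; the driver genericizes and reuses the
existing Rx refillq code rather than introducing a new structure.

#include <stdbool.h>
#include <stdint.h>

/* One slot per possible completion tag; the ring starts full of free tags. */
struct tag_refillq {
	uint16_t *ring;
	uint32_t size;
	uint32_t next_to_use;	/* clean path posts freed tags here */
	uint32_t next_to_clean;	/* send path pops free tags from here */
	uint32_t count;		/* free tags currently available */
};

/* Send path: pop the next free tag, or fail so the caller can back off. */
static bool tag_refillq_get(struct tag_refillq *rq, uint16_t *tag)
{
	if (!rq->count)
		return false;

	*tag = rq->ring[rq->next_to_clean];
	rq->next_to_clean = (rq->next_to_clean + 1) % rq->size;
	rq->count--;

	return true;
}

/* Clean path: post a tag back once its buffers have been cleaned. */
static void tag_refillq_post(struct tag_refillq *rq, uint16_t tag)
{
	rq->ring[rq->next_to_use] = tag;
	rq->next_to_use = (rq->next_to_use + 1) % rq->size;
	rq->count++;
}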
* RE: [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic Joshua Hay
@ 2025-08-12 16:45 ` Salin, Samuel
0 siblings, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Luigi Rizzo, Brian Vazquez,
Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>; Luigi
> Rizzo <lrizzo@google.com>; Brian Vazquez <brianvv@google.com>; Chittim,
> Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE
> bit logic
>
> Track the gap between next_to_use and the last RE index. Set RE again if the
> gap is large enough to ensure the RE bit is set frequently. This is critical before
> removing the stashing mechanisms because the opportunistic descriptor ring
> cleaning from the out-of-order completions will go away. Previously the
> descriptors would be "cleaned" by both the descriptor (RE) completion and
> the out-of-order completions. Without the latter, we must ensure the RE bit is
> set more frequently. Otherwise, it's theoretically possible for the descriptor
> ring next_to_clean to never advance. The previous implementation was
> dependent on the start of a packet falling on a 64th index in the descriptor
> ring, which is not guaranteed with large packets.
>
> Signed-off-by: Luigi Rizzo <lrizzo@google.com>
> Signed-off-by: Brian Vazquez <brianvv@google.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
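A rough sketch of the gap check described above. The function and parameter
names are assumptions; only the idea of comparing the wrap-aware distance
since the last RE descriptor against a threshold is taken from the commit
message.

#include <stdbool.h>
#include <stdint.h>

/*
 * Set RE on this packet's last descriptor if enough descriptors have been
 * used since the last descriptor that carried RE.  The wrap-aware distance
 * avoids relying on a packet starting at any particular ring index.
 */
static bool my_want_re_bit(uint16_t ntu, uint16_t last_re_idx,
			   uint16_t ring_size, uint16_t gap_thresh)
{
	uint16_t gap = ntu >= last_re_idx ?
		       ntu - last_re_idx :
		       ring_size - last_re_idx + ntu;

	return gap >= gap_thresh;
}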
* RE: [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path Joshua Hay
@ 2025-08-12 16:45 ` Salin, Samuel
0 siblings, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx
> packet rollback error path
>
> Move (and rename) the existing rollback logic to singleq.c since that will be the
> only consumer. Create a simplified splitq-specific rollback function to loop
> through and unmap tx_bufs based on the completion tag.
> This is critical before replacing the Tx buffer ring with the buffer pool since the
> previous rollback indexing will not work to unmap the chained buffers from
> the pool.
>
> Cache the next_to_use index before any portion of the packet is put on the
> descriptor ring. In case of an error, the rollback will bump tail to the correct
> next_to_use value. Because the splitq path now supports different types of
> context descriptors (and potentially multiple in the future), this will take care
> of rolling back any and all context descriptors encoded on the ring for the
> erroneous packet. The previous rollback logic was broken for PTP packets since
> it would not account for the PTP context descriptor.
>
> Fixes: 1a49cf814fe1 ("idpf: add Tx timestamp flows")
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
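A minimal sketch of the rollback flow described above, using hypothetical
my_* names. Only the control flow (cache next_to_use before writing any
context or data descriptors, then unmap and restore it on error) reflects the
commit message; everything else is a placeholder.

#include <errno.h>
#include <stdint.h>

struct my_txq {
	uint16_t next_to_use;
	/* descriptor ring, buffer state, ... */
};

struct my_pkt;	/* opaque here */

/* assumed helpers: write descriptors, unmap on failure, ring the doorbell */
int my_write_ctx_descs(struct my_txq *txq, struct my_pkt *pkt);
int my_map_and_write_data_descs(struct my_txq *txq, struct my_pkt *pkt);
void my_unmap_pkt_bufs(struct my_txq *txq, struct my_pkt *pkt);
void my_bump_tail(struct my_txq *txq, uint16_t ntu);

/*
 * Cache next_to_use before any descriptor (context or data) is written.
 * On error, unmap whatever was mapped and restore the cached index, which
 * also discards any context descriptors written for this packet.
 */
static int my_send_pkt(struct my_txq *txq, struct my_pkt *pkt)
{
	uint16_t cached_ntu = txq->next_to_use;

	if (my_write_ctx_descs(txq, pkt) ||
	    my_map_and_write_data_descs(txq, pkt)) {
		my_unmap_pkt_bufs(txq, pkt);
		txq->next_to_use = cached_ntu;
		return -ENOMEM;
	}

	my_bump_tail(txq, txq->next_to_use);
	return 0;
}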
* RE: [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool Joshua Hay
2025-08-04 17:02 ` Alexander Lobakin
@ 2025-08-12 16:45 ` Salin, Samuel
1 sibling, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Luigi Rizzo, Brian Vazquez,
Chittim, Madhu, Loktionov, Aleksandr
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>; Luigi
> Rizzo <lrizzo@google.com>; Brian Vazquez <brianvv@google.com>; Chittim,
> Madhu <madhu.chittim@intel.com>; Loktionov, Aleksandr
> <aleksandr.loktionov@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling
> buffer ring with buffer pool
>
> Replace the TxQ buffer ring with one large pool/array of buffers (only for flow
> scheduling). This eliminates the tag generation and makes it impossible for a
> tag to be associated with more than one packet.
>
> The completion tag passed to HW through the descriptor is the index into the
> array. That same completion tag is posted back to the driver in the completion
> descriptor, and used to index into the array to quickly retrieve the buffer
> during cleaning. In this way, the tags are treated as a fixed-size resource. If all
> tags are in use, no more packets can be sent on that particular queue (until
> some are freed up). The tag pool size is 64K since the completion tag width is
> 16 bits.
>
> For each packet, the driver pulls a free tag from the refillq to get the next free
> buffer index. When cleaning is complete, the tag is posted back to the refillq. A
> multi-frag packet spans multiple buffers in the driver, therefore it uses multiple
> buffer indexes/tags from the pool.
> Each frag pulls from the refillq to get the next free buffer index.
> These are tracked in a next_buf field that replaces the completion tag field in
> the buffer struct. This chains the buffers together so that the packet can be
> cleaned from the starting completion tag taken from the completion
> descriptor, then from the next_buf field for each subsequent buffer.
>
> If a dma_mapping_error occurs or the refillq runs out of free buf_ids,
> the packet will execute the rollback error path. This unmaps any buffers
> previously mapped for the packet. Since several free buf_ids could have
> already been pulled from the refillq, we need to restore its original state as
> well. Otherwise, the buf_ids/tags will be leaked and not used again until the
> queue is reallocated.
>
> Descriptor completions only advance the descriptor ring index to "clean"
> the descriptors. The packet completions only clean the buffers associated with
> the given packet completion tag and do not update the descriptor ring index.
>
> When operating in queue based scheduling mode, the array still acts as a ring
> and will only have TxQ descriptor count entries. The tx_bufs are still associated
> 1:1 with the descriptor ring entries and we can use the conventional indexing
> mechanisms.
>
> Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support")
> Signed-off-by: Luigi Rizzo <lrizzo@google.com>
> Signed-off-by: Brian Vazquez <brianvv@google.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
> ---
> v3:
> - remove unreachable code
>
> v2:
> - removed unused buf_size
> - miscellaneous cleanup based on changes to prior patches and addition
> of packet rollback changes patch
> - refactor packet rollback logic to iterate through chained bufs
> - add refillq state restore if rollback occurs
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
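For illustration, a sketch of how a multi-buffer packet could be chained
through the buffer pool. All my_* names are hypothetical; what is taken from
the commit message is that each DMA-mapped segment (header or frag) pulls a
free buf_id from the refillq, the first buf_id becomes the completion tag
written to the descriptor, and next_buf links the buffers for cleaning.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define MY_TXBUF_NULL	UINT32_MAX

struct my_tx_buf {
	uint64_t dma;
	uint32_t len;
	uint32_t next_buf;	/* buf_id of the next buffer in this packet */
};

struct my_seg {			/* one DMA-mapped segment (header or frag) */
	uint64_t dma;
	uint32_t len;
};

struct my_txq {
	struct my_tx_buf *bufs;
	void *refillq;
};

/* assumed to pop a free buf_id from the Tx refillq, false if exhausted */
bool my_get_free_buf_id(void *refillq, uint32_t *id);

/*
 * Pull one free buf_id per segment and chain the buffers so cleaning can
 * walk the whole packet from the completion tag in the completion descriptor.
 */
static int my_chain_pkt_bufs(struct my_txq *txq, const struct my_seg *segs,
			     int nr_segs, uint32_t *first_id)
{
	uint32_t prev_id = MY_TXBUF_NULL;

	for (int i = 0; i < nr_segs; i++) {
		uint32_t id;

		if (!my_get_free_buf_id(txq->refillq, &id))
			return -EBUSY;		/* caller rolls back */

		txq->bufs[id].dma = segs[i].dma;
		txq->bufs[id].len = segs[i].len;
		txq->bufs[id].next_buf = MY_TXBUF_NULL;

		if (prev_id == MY_TXBUF_NULL)
			*first_id = id;		/* becomes the completion tag */
		else
			txq->bufs[prev_id].next_buf = id;

		prev_id = id;
	}

	return 0;
}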
* RE: [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources Joshua Hay
@ 2025-08-12 16:45 ` Salin, Samuel
0 siblings, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are
> insufficient buffer resources
>
> The Tx refillq logic will cause packets to be silently dropped if there are not
> enough buffer resources available to send a packet in flow scheduling mode.
> Instead, determine how many buffers are needed along with the number of
> descriptors. Make sure there are enough of both resources to send the packet,
> and stop the queue if not.
>
> Fixes: 7292af042bcf ("idpf: fix a race in txq wakeup")
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> v2:
> - Init buf_count to 1 and += nr_frags to account for header
> - s/unsigned int/u32 where appropriate
> - replaced BUFS_UNUSED macro with static inline func
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
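A minimal sketch of the resource check described above, with hypothetical
names. The buffer count of one for the header plus one per frag follows the
v2 changelog note; how the counts of free descriptors and free buf_ids are
obtained is left out.

#include <stdbool.h>
#include <stdint.h>

/*
 * Return true if the queue should be stopped: the packet needs enough free
 * descriptors and enough free buf_ids (one for the header plus one per frag).
 */
static bool my_tx_maybe_stop(uint32_t free_descs, uint32_t free_buf_ids,
			     uint32_t needed_descs, uint32_t nr_frags)
{
	uint32_t needed_bufs = 1 + nr_frags;

	return free_descs < needed_descs || free_buf_ids < needed_bufs;
}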
* RE: [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code Joshua Hay
2025-07-28 9:04 ` Loktionov, Aleksandr
@ 2025-08-12 16:45 ` Salin, Samuel
1 sibling, 0 replies; 20+ messages in thread
From: Salin, Samuel @ 2025-08-12 16:45 UTC (permalink / raw)
To: Hay, Joshua A, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, Hay, Joshua A, Chittim, Madhu
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; Hay, Joshua A <joshua.a.hay@intel.com>;
> Chittim, Madhu <madhu.chittim@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete
> stashing code
>
> With the new Tx buffer management scheme, there is no need for all of the
> stashing mechanisms, the hash table, the reserve buffer stack, etc.
> Remove all of that.
>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@intel.com>
> ---
> v3: update comment format
> ---
> 2.39.2
Tested-by: Samuel Salin <Samuel.salin@intel.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2025-08-12 16:45 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-25 18:42 [Intel-wired-lan] [PATCH iwl-net v3 0/6] idpf: replace Tx flow scheduling buffer ring with buffer pool Joshua Hay
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 1/6] idpf: add support for Tx refillqs in flow scheduling mode Joshua Hay
2025-07-28 17:06 ` Paul Menzel
2025-07-29 17:15 ` Hay, Joshua A
2025-07-29 22:53 ` Paul Menzel
2025-07-31 21:08 ` Hay, Joshua A
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 2/6] idpf: improve when to set RE bit logic Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 3/6] idpf: simplify and fix splitq Tx packet rollback error path Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow scheduling buffer ring with buffer pool Joshua Hay
2025-08-04 17:02 ` Alexander Lobakin
2025-08-05 22:40 ` Hay, Joshua A
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 5/6] idpf: stop Tx if there are insufficient buffer resources Joshua Hay
2025-08-12 16:45 ` Salin, Samuel
2025-07-25 18:42 ` [Intel-wired-lan] [PATCH iwl-net v3 6/6] idpf: remove obsolete stashing code Joshua Hay
2025-07-28 9:04 ` Loktionov, Aleksandr
2025-08-12 16:45 ` Salin, Samuel