netdev.vger.kernel.org archive mirror
* [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy
@ 2025-08-12  7:55 Jason Xing
  2025-08-12  7:55 ` [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq Jason Xing
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Jason Xing @ 2025-08-12  7:55 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	maciej.fijalkowski
  Cc: intel-wired-lan, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

The series mostly follows the development of i40e/ice to improve the
performance of the zerocopy mode in the Tx path.

---
V2
Link: https://lore.kernel.org/intel-wired-lan/20250720091123.474-1-kerneljasonxing@gmail.com/
1. Removed the previous 2nd and last patches.

Jason Xing (3):
  ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq
  ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
  ixgbe: xsk: support batched xsk Tx interfaces to increase performance

 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   2 +-
 .../ethernet/intel/ixgbe/ixgbe_txrx_common.h  |   2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c  | 113 ++++++++++++------
 3 files changed, 76 insertions(+), 41 deletions(-)

-- 
2.41.3



* [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq
  2025-08-12  7:55 [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Jason Xing
@ 2025-08-12  7:55 ` Jason Xing
  2025-08-12 15:42   ` Maciej Fijalkowski
  2025-08-12  7:55 ` [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc Jason Xing
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Jason Xing @ 2025-08-12  7:55 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	maciej.fijalkowski
  Cc: intel-wired-lan, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

Since the 'budget' parameter in ixgbe_clean_xdp_tx_irq() takes no
effect, remove it. No functional change here.

Reviewed-by: Larysa Zaremba <larysa.zaremba@intel.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c        | 2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h | 2 +-
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c         | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 9a6a67a6d644..7a9508e1c05a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -3585,7 +3585,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
 
 	ixgbe_for_each_ring(ring, q_vector->tx) {
 		bool wd = ring->xsk_pool ?
-			  ixgbe_clean_xdp_tx_irq(q_vector, ring, budget) :
+			  ixgbe_clean_xdp_tx_irq(q_vector, ring) :
 			  ixgbe_clean_tx_irq(q_vector, ring, budget);
 
 		if (!wd)
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
index 78deea5ec536..788722fe527a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
@@ -42,7 +42,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
 			  const int budget);
 void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring);
 bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
-			    struct ixgbe_ring *tx_ring, int napi_budget);
+			    struct ixgbe_ring *tx_ring);
 int ixgbe_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
 void ixgbe_xsk_clean_tx_ring(struct ixgbe_ring *tx_ring);
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 7b941505a9d0..a463c5ac9c7c 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -456,7 +456,7 @@ static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
 }
 
 bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
-			    struct ixgbe_ring *tx_ring, int napi_budget)
+			    struct ixgbe_ring *tx_ring)
 {
 	u16 ntc = tx_ring->next_to_clean, ntu = tx_ring->next_to_use;
 	unsigned int total_packets = 0, total_bytes = 0;
-- 
2.41.3



* [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
  2025-08-12  7:55 [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Jason Xing
  2025-08-12  7:55 ` [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq Jason Xing
@ 2025-08-12  7:55 ` Jason Xing
  2025-08-13 11:14   ` [Intel-wired-lan] " Loktionov, Aleksandr
  2025-08-12  7:55 ` [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance Jason Xing
  2025-08-12 20:44 ` [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Tony Nguyen
  3 siblings, 1 reply; 13+ messages in thread
From: Jason Xing @ 2025-08-12  7:55 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	maciej.fijalkowski
  Cc: intel-wired-lan, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

- Use ixgbe_desc_unused() as the budget value.
- Avoid checking desc_unused over and over again in the loop.

The patch makes ixgbe follow the i40e driver, as done in commit
1fd972ebe523 ("i40e: move check of full Tx ring to outside of send loop").
[ Note that the above i40e patch has a problem when the number of unused
descriptors passed in as the budget is zero. A zero budget means there
are no descriptors available to send, so the function should return true
to tell the NAPI poll not to launch another poll to handle Tx packets.
Even though that patch does return true in this case, it only does so
because of an unintended underflow of the budget. The current version
of i40e_xmit_zc(), by contrast, returns true as expected. ]
Hence, this patch adds a standalone zero-budget check in front of the
send loop in ixgbe_xmit_zc(), as explained above.

Use ixgbe_desc_unused() to replace the original fixed budget with the
number of available slots in the Tx ring. It can gain some performance.
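
[ For reference, ixgbe_desc_unused() in ixgbe.h derives the number of
free slots from the ring indices, roughly:

	static inline u16 ixgbe_desc_unused(struct ixgbe_ring *ring)
	{
		u16 ntc = ring->next_to_clean;
		u16 ntu = ring->next_to_use;

		/* one slot is always left unused, hence the -1 */
		return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
	}
]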

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
In this version, I keep it as is (please see the following link)
https://lore.kernel.org/intel-wired-lan/CAL+tcoAUW_J62aw3aGBru+0GmaTjoom1qu8Y=aiSc9EGU09Nww@mail.gmail.com/
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index a463c5ac9c7c..f3d3f5c1cdc7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -393,17 +393,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 	struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
 	union ixgbe_adv_tx_desc *tx_desc = NULL;
 	struct ixgbe_tx_buffer *tx_bi;
-	bool work_done = true;
 	struct xdp_desc desc;
 	dma_addr_t dma;
 	u32 cmd_type;
 
-	while (likely(budget)) {
-		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
-			work_done = false;
-			break;
-		}
+	if (!budget)
+		return true;
 
+	while (likely(budget)) {
 		if (!netif_carrier_ok(xdp_ring->netdev))
 			break;
 
@@ -442,7 +439,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 		xsk_tx_release(pool);
 	}
 
-	return !!budget && work_done;
+	return !!budget;
 }
 
 static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
@@ -505,7 +502,7 @@ bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
 	if (xsk_uses_need_wakeup(pool))
 		xsk_set_tx_need_wakeup(pool);
 
-	return ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit);
+	return ixgbe_xmit_zc(tx_ring, ixgbe_desc_unused(tx_ring));
 }
 
 int ixgbe_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
-- 
2.41.3



* [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance
  2025-08-12  7:55 [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Jason Xing
  2025-08-12  7:55 ` [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq Jason Xing
  2025-08-12  7:55 ` [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc Jason Xing
@ 2025-08-12  7:55 ` Jason Xing
  2025-08-12 15:42   ` Maciej Fijalkowski
  2025-08-12 20:44 ` [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Tony Nguyen
  3 siblings, 1 reply; 13+ messages in thread
From: Jason Xing @ 2025-08-12  7:55 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	maciej.fijalkowski
  Cc: intel-wired-lan, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

Like what the i40e driver initially did in commit 3106c580fb7cf
("i40e: Use batched xsk Tx interfaces to increase performance"), use
the batched xsk Tx interfaces to transmit packets.
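
[ Note that only the last descriptor of each xmit burst gets the RS bit,
via ixgbe_set_rs_bit(); the other descriptors carry EOP only, so the HW
reports completion once per burst instead of once per packet. Also, the
fill is split around the ring wrap: e.g. with count = 512,
next_to_use = 510 and nb_pkts = 5, the first ixgbe_fill_tx_hw_ring()
call writes slots 510-511 (nb_processed = 2), next_to_use is reset to 0,
and the second call writes the remaining 3 descriptors at slots 0-2. ]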

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
In this version, I still choose to use the current implementation. Last
time, at first glance, I agreed that 'i' was useless, but it is not.
https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/
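
[ A short illustration of why 'i' matters: the unrolled batch path keeps
a fixed base pointer and indexes into it, while the leftover path
advances the pointer itself and always passes index 0:

	ixgbe_xmit_pkt(xdp_ring, desc, i);	/* batch: desc[0..3] */
	ixgbe_xmit_pkt(xdp_ring, &descs[i], 0);	/* leftover: base moves */
]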
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
 1 file changed, 72 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -2,12 +2,15 @@
 /* Copyright(c) 2018 Intel Corporation. */
 
 #include <linux/bpf_trace.h>
+#include <linux/unroll.h>
 #include <net/xdp_sock_drv.h>
 #include <net/xdp.h>
 
 #include "ixgbe.h"
 #include "ixgbe_txrx_common.h"
 
+#define PKTS_PER_BATCH 4
+
 struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
 				     struct ixgbe_ring *ring)
 {
@@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
 	}
 }
 
-static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
+{
+	u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
+	union ixgbe_adv_tx_desc *tx_desc;
+
+	tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
+	tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);
+}
+
+static void ixgbe_xmit_pkt(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc,
+			   int i)
+
 {
 	struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
 	union ixgbe_adv_tx_desc *tx_desc = NULL;
 	struct ixgbe_tx_buffer *tx_bi;
-	struct xdp_desc desc;
 	dma_addr_t dma;
 	u32 cmd_type;
 
-	if (!budget)
-		return true;
+	dma = xsk_buff_raw_get_dma(pool, desc[i].addr);
+	xsk_buff_raw_dma_sync_for_device(pool, dma, desc[i].len);
 
-	while (likely(budget)) {
-		if (!netif_carrier_ok(xdp_ring->netdev))
-			break;
+	tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
+	tx_bi->bytecount = desc[i].len;
+	tx_bi->xdpf = NULL;
+	tx_bi->gso_segs = 1;
 
-		if (!xsk_tx_peek_desc(pool, &desc))
-			break;
+	tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
+	tx_desc->read.buffer_addr = cpu_to_le64(dma);
 
-		dma = xsk_buff_raw_get_dma(pool, desc.addr);
-		xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
+	cmd_type = IXGBE_ADVTXD_DTYP_DATA |
+		   IXGBE_ADVTXD_DCMD_DEXT |
+		   IXGBE_ADVTXD_DCMD_IFCS;
+	cmd_type |= desc[i].len | IXGBE_TXD_CMD_EOP;
+	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
+	tx_desc->read.olinfo_status =
+		cpu_to_le32(desc[i].len << IXGBE_ADVTXD_PAYLEN_SHIFT);
 
-		tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
-		tx_bi->bytecount = desc.len;
-		tx_bi->xdpf = NULL;
-		tx_bi->gso_segs = 1;
+	xdp_ring->next_to_use++;
+}
 
-		tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
-		tx_desc->read.buffer_addr = cpu_to_le64(dma);
+static void ixgbe_xmit_pkt_batch(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc)
+{
+	u32 i;
 
-		/* put descriptor type bits */
-		cmd_type = IXGBE_ADVTXD_DTYP_DATA |
-			   IXGBE_ADVTXD_DCMD_DEXT |
-			   IXGBE_ADVTXD_DCMD_IFCS;
-		cmd_type |= desc.len | IXGBE_TXD_CMD;
-		tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
-		tx_desc->read.olinfo_status =
-			cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+	unrolled_count(PKTS_PER_BATCH)
+	for (i = 0; i < PKTS_PER_BATCH; i++)
+		ixgbe_xmit_pkt(xdp_ring, desc, i);
+}
 
-		xdp_ring->next_to_use++;
-		if (xdp_ring->next_to_use == xdp_ring->count)
-			xdp_ring->next_to_use = 0;
+static void ixgbe_fill_tx_hw_ring(struct ixgbe_ring *xdp_ring,
+				  struct xdp_desc *descs, u32 nb_pkts)
+{
+	u32 batched, leftover, i;
+
+	batched = nb_pkts & ~(PKTS_PER_BATCH - 1);
+	leftover = nb_pkts & (PKTS_PER_BATCH - 1);
+	for (i = 0; i < batched; i += PKTS_PER_BATCH)
+		ixgbe_xmit_pkt_batch(xdp_ring, &descs[i]);
+	for (i = batched; i < batched + leftover; i++)
+		ixgbe_xmit_pkt(xdp_ring, &descs[i], 0);
+}
 
-		budget--;
-	}
+static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
+{
+	struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
+	u32 nb_pkts, nb_processed = 0;
 
-	if (tx_desc) {
-		ixgbe_xdp_ring_update_tail(xdp_ring);
-		xsk_tx_release(pool);
+	if (!netif_carrier_ok(xdp_ring->netdev))
+		return true;
+
+	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
+	if (!nb_pkts)
+		return true;
+
+	if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
+		nb_processed = xdp_ring->count - xdp_ring->next_to_use;
+		ixgbe_fill_tx_hw_ring(xdp_ring, descs, nb_processed);
+		xdp_ring->next_to_use = 0;
 	}
 
-	return !!budget;
+	ixgbe_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed);
+
+	ixgbe_set_rs_bit(xdp_ring);
+	ixgbe_xdp_ring_update_tail(xdp_ring);
+
+	return nb_pkts < budget;
 }
 
 static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
-- 
2.41.3



* Re: [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance
  2025-08-12  7:55 ` [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance Jason Xing
@ 2025-08-12 15:42   ` Maciej Fijalkowski
  2025-08-13  0:34     ` Jason Xing
  0 siblings, 1 reply; 13+ messages in thread
From: Maciej Fijalkowski @ 2025-08-12 15:42 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	intel-wired-lan, netdev, Jason Xing

On Tue, Aug 12, 2025 at 03:55:04PM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
> 

Hi Jason,

patches should be targeted at iwl-next as these are improvements, not
fixes.

> Like what i40e driver initially did in commit 3106c580fb7cf
> ("i40e: Use batched xsk Tx interfaces to increase performance"), use
> the batched xsk feature to transmit packets.
> 
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
> In this version, I still choose use the current implementation. Last
> time at the first glance, I agreed 'i' is useless but it is not.
> https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/

dare to share the performance improvement (if any, in the current form)?

also you have not mentioned in v1->v2 that you dropped the setting of
xdp_zc_max_segs, which is a step in the right direction.

> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
>  1 file changed, 72 insertions(+), 34 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> @@ -2,12 +2,15 @@
>  /* Copyright(c) 2018 Intel Corporation. */
>  
>  #include <linux/bpf_trace.h>
> +#include <linux/unroll.h>
>  #include <net/xdp_sock_drv.h>
>  #include <net/xdp.h>
>  
>  #include "ixgbe.h"
>  #include "ixgbe_txrx_common.h"
>  
> +#define PKTS_PER_BATCH 4
> +
>  struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
>  				     struct ixgbe_ring *ring)
>  {
> @@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
>  	}
>  }
>  
> -static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> +static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
> +{
> +	u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
> +	union ixgbe_adv_tx_desc *tx_desc;
> +
> +	tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
> +	tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);

you have not addressed the descriptor cleaning path, which makes this
change rather pointless, or even breaks the driver behavior.

the point of such a change is to limit the interrupts raised by HW once
it is done with sending the descriptors. you still walk the descs
one-by-one in ixgbe_clean_xdp_tx_irq().
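
a rough sketch of what I mean (hypothetical - ixgbe does not track a
last_rs index today, so the xmit path would have to record where it set
the RS bit):

	union ixgbe_adv_tx_desc *tx_desc;

	tx_desc = IXGBE_TX_DESC(tx_ring, tx_ring->last_rs);
	if (tx_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)) {
		/* HW is done with everything up to and including
		 * last_rs, so free the whole batch back to the pool in
		 * one call instead of walking the descs one-by-one
		 */
		xsk_tx_completed(pool, completed_frames);
	}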

> +}
> +
> +static void ixgbe_xmit_pkt(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc,
> +			   int i)
> +
>  {
>  	struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
>  	union ixgbe_adv_tx_desc *tx_desc = NULL;
>  	struct ixgbe_tx_buffer *tx_bi;
> -	struct xdp_desc desc;
>  	dma_addr_t dma;
>  	u32 cmd_type;
>  
> -	if (!budget)
> -		return true;
> +	dma = xsk_buff_raw_get_dma(pool, desc[i].addr);
> +	xsk_buff_raw_dma_sync_for_device(pool, dma, desc[i].len);
>  
> -	while (likely(budget)) {
> -		if (!netif_carrier_ok(xdp_ring->netdev))
> -			break;
> +	tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> +	tx_bi->bytecount = desc[i].len;
> +	tx_bi->xdpf = NULL;
> +	tx_bi->gso_segs = 1;
>  
> -		if (!xsk_tx_peek_desc(pool, &desc))
> -			break;
> +	tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> +	tx_desc->read.buffer_addr = cpu_to_le64(dma);
>  
> -		dma = xsk_buff_raw_get_dma(pool, desc.addr);
> -		xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
> +	cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> +		   IXGBE_ADVTXD_DCMD_DEXT |
> +		   IXGBE_ADVTXD_DCMD_IFCS;
> +	cmd_type |= desc[i].len | IXGBE_TXD_CMD_EOP;
> +	tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> +	tx_desc->read.olinfo_status =
> +		cpu_to_le32(desc[i].len << IXGBE_ADVTXD_PAYLEN_SHIFT);
>  
> -		tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> -		tx_bi->bytecount = desc.len;
> -		tx_bi->xdpf = NULL;
> -		tx_bi->gso_segs = 1;
> +	xdp_ring->next_to_use++;
> +}
>  
> -		tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> -		tx_desc->read.buffer_addr = cpu_to_le64(dma);
> +static void ixgbe_xmit_pkt_batch(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc)
> +{
> +	u32 i;
>  
> -		/* put descriptor type bits */
> -		cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> -			   IXGBE_ADVTXD_DCMD_DEXT |
> -			   IXGBE_ADVTXD_DCMD_IFCS;
> -		cmd_type |= desc.len | IXGBE_TXD_CMD;
> -		tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> -		tx_desc->read.olinfo_status =
> -			cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> +	unrolled_count(PKTS_PER_BATCH)
> +	for (i = 0; i < PKTS_PER_BATCH; i++)
> +		ixgbe_xmit_pkt(xdp_ring, desc, i);
> +}
>  
> -		xdp_ring->next_to_use++;
> -		if (xdp_ring->next_to_use == xdp_ring->count)
> -			xdp_ring->next_to_use = 0;
> +static void ixgbe_fill_tx_hw_ring(struct ixgbe_ring *xdp_ring,
> +				  struct xdp_desc *descs, u32 nb_pkts)
> +{
> +	u32 batched, leftover, i;
> +
> +	batched = nb_pkts & ~(PKTS_PER_BATCH - 1);
> +	leftover = nb_pkts & (PKTS_PER_BATCH - 1);
> +	for (i = 0; i < batched; i += PKTS_PER_BATCH)
> +		ixgbe_xmit_pkt_batch(xdp_ring, &descs[i]);
> +	for (i = batched; i < batched + leftover; i++)
> +		ixgbe_xmit_pkt(xdp_ring, &descs[i], 0);
> +}
>  
> -		budget--;
> -	}
> +static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> +{
> +	struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
> +	u32 nb_pkts, nb_processed = 0;
>  
> -	if (tx_desc) {
> -		ixgbe_xdp_ring_update_tail(xdp_ring);
> -		xsk_tx_release(pool);
> +	if (!netif_carrier_ok(xdp_ring->netdev))
> +		return true;
> +
> +	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
> +	if (!nb_pkts)
> +		return true;
> +
> +	if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
> +		nb_processed = xdp_ring->count - xdp_ring->next_to_use;
> +		ixgbe_fill_tx_hw_ring(xdp_ring, descs, nb_processed);
> +		xdp_ring->next_to_use = 0;
>  	}
>  
> -	return !!budget;
> +	ixgbe_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed);
> +
> +	ixgbe_set_rs_bit(xdp_ring);
> +	ixgbe_xdp_ring_update_tail(xdp_ring);
> +
> +	return nb_pkts < budget;
>  }
>  
>  static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
> -- 
> 2.41.3
> 


* Re: [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq
  2025-08-12  7:55 ` [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq Jason Xing
@ 2025-08-12 15:42   ` Maciej Fijalkowski
  0 siblings, 0 replies; 13+ messages in thread
From: Maciej Fijalkowski @ 2025-08-12 15:42 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	intel-wired-lan, netdev, Jason Xing

On Tue, Aug 12, 2025 at 03:55:02PM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
> 
> Since 'budget' parameter in ixgbe_clean_xdp_tx_irq() takes no effect,
> the patch removes it. No functional change here.
> 
> Reviewed-by: Larysa Zaremba <larysa.zaremba@intel.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>

Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c        | 2 +-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h | 2 +-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c         | 2 +-
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index 9a6a67a6d644..7a9508e1c05a 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -3585,7 +3585,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
>  
>  	ixgbe_for_each_ring(ring, q_vector->tx) {
>  		bool wd = ring->xsk_pool ?
> -			  ixgbe_clean_xdp_tx_irq(q_vector, ring, budget) :
> +			  ixgbe_clean_xdp_tx_irq(q_vector, ring) :
>  			  ixgbe_clean_tx_irq(q_vector, ring, budget);
>  
>  		if (!wd)
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
> index 78deea5ec536..788722fe527a 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_txrx_common.h
> @@ -42,7 +42,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
>  			  const int budget);
>  void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring);
>  bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
> -			    struct ixgbe_ring *tx_ring, int napi_budget);
> +			    struct ixgbe_ring *tx_ring);
>  int ixgbe_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags);
>  void ixgbe_xsk_clean_tx_ring(struct ixgbe_ring *tx_ring);
>  
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> index 7b941505a9d0..a463c5ac9c7c 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> @@ -456,7 +456,7 @@ static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
>  }
>  
>  bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
> -			    struct ixgbe_ring *tx_ring, int napi_budget)
> +			    struct ixgbe_ring *tx_ring)
>  {
>  	u16 ntc = tx_ring->next_to_clean, ntu = tx_ring->next_to_use;
>  	unsigned int total_packets = 0, total_bytes = 0;
> -- 
> 2.41.3
> 


* Re: [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy
  2025-08-12  7:55 [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Jason Xing
                   ` (2 preceding siblings ...)
  2025-08-12  7:55 ` [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance Jason Xing
@ 2025-08-12 20:44 ` Tony Nguyen
  2025-08-12 23:38   ` Jason Xing
  3 siblings, 1 reply; 13+ messages in thread
From: Tony Nguyen @ 2025-08-12 20:44 UTC (permalink / raw)
  To: Jason Xing, davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	przemyslaw.kitszel, sdf, larysa.zaremba, maciej.fijalkowski
  Cc: intel-wired-lan, netdev, Jason Xing

On 8/12/2025 12:55 AM, Jason Xing wrote:

Hi Jason,

A procedural nit:
iwl-net is for net-targeted patches and iwl-next for net-next patches; I 
believe this should be for 'iwl-next'.

Thanks,
Tony

> From: Jason Xing <kernelxing@tencent.com>
> 
> The series mostly follows the development of i40e/ice to improve the
> performance for zerocopy mode in the tx path.
> 
> ---
> V2
> Link: https://lore.kernel.org/intel-wired-lan/20250720091123.474-1-kerneljasonxing@gmail.com/
> 1. remove previous 2nd and last patch.
> 
> Jason Xing (3):
>    ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq
>    ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
>    ixgbe: xsk: support batched xsk Tx interfaces to increase performance
> 
>   drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   2 +-
>   .../ethernet/intel/ixgbe/ixgbe_txrx_common.h  |   2 +-
>   drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c  | 113 ++++++++++++------
>   3 files changed, 76 insertions(+), 41 deletions(-)
> 



* Re: [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy
  2025-08-12 20:44 ` [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Tony Nguyen
@ 2025-08-12 23:38   ` Jason Xing
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Xing @ 2025-08-12 23:38 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	przemyslaw.kitszel, sdf, larysa.zaremba, maciej.fijalkowski,
	intel-wired-lan, netdev, Jason Xing

On Wed, Aug 13, 2025 at 4:45 AM Tony Nguyen <anthony.l.nguyen@intel.com> wrote:
>
> On 8/12/2025 12:55 AM, Jason Xing wrote:
>
> Hi Jason,
>
> A procedural nit:
> iwl-net is for net targeted patches and iwl-next for net-next patches; I
> believe this should be for 'iwl-next'.

Hi Tony,

I see. Thanks for reminding me. I will change the subject. (This
series is built on top of the next-queue branch as you pointed out
before.)

Thanks,
Jason

>
> Thanks,
> Tony
>
> > From: Jason Xing <kernelxing@tencent.com>
> >
> > The series mostly follows the development of i40e/ice to improve the
> > performance for zerocopy mode in the tx path.
> >
> > ---
> > V2
> > Link: https://lore.kernel.org/intel-wired-lan/20250720091123.474-1-kerneljasonxing@gmail.com/
> > 1. remove previous 2nd and last patch.
> >
> > Jason Xing (3):
> >    ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq
> >    ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
> >    ixgbe: xsk: support batched xsk Tx interfaces to increase performance
> >
> >   drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   2 +-
> >   .../ethernet/intel/ixgbe/ixgbe_txrx_common.h  |   2 +-
> >   drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c  | 113 ++++++++++++------
> >   3 files changed, 76 insertions(+), 41 deletions(-)
> >
>


* Re: [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance
  2025-08-12 15:42   ` Maciej Fijalkowski
@ 2025-08-13  0:34     ` Jason Xing
  2025-08-13 18:08       ` Maciej Fijalkowski
  0 siblings, 1 reply; 13+ messages in thread
From: Jason Xing @ 2025-08-13  0:34 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	intel-wired-lan, netdev, Jason Xing

Hi Maciej,

On Tue, Aug 12, 2025 at 11:42 PM Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> On Tue, Aug 12, 2025 at 03:55:04PM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@tencent.com>
> >
>
> Hi Jason,
>
> patches should be targetted at iwl-next as these are improvements, not
> fixes.

Oh, right.

>
> > Like what i40e driver initially did in commit 3106c580fb7cf
> > ("i40e: Use batched xsk Tx interfaces to increase performance"), use
> > the batched xsk feature to transmit packets.
> >
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > ---
> > In this version, I still choose use the current implementation. Last
> > time at the first glance, I agreed 'i' is useless but it is not.
> > https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/
>
> dare to share the performance improvement (if any, in the current form)?

I tested the whole series; sorry, no actual improvement could be seen
through xdpsock. Not even with the first series. :(

>
> also you have not mentioned in v1->v2 that you dropped the setting of
> xdp_zc_max_segs, which is a step in a correct path.

Oops, I blindly dropped the last patch without carefully checking it.
Thanks for pointing it out.

I set it to four for ixgbe. I'm not sure whether there is any theory
behind choosing this value?

>
> > ---
> >  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
> >  1 file changed, 72 insertions(+), 34 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > @@ -2,12 +2,15 @@
> >  /* Copyright(c) 2018 Intel Corporation. */
> >
> >  #include <linux/bpf_trace.h>
> > +#include <linux/unroll.h>
> >  #include <net/xdp_sock_drv.h>
> >  #include <net/xdp.h>
> >
> >  #include "ixgbe.h"
> >  #include "ixgbe_txrx_common.h"
> >
> > +#define PKTS_PER_BATCH 4
> > +
> >  struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
> >                                    struct ixgbe_ring *ring)
> >  {
> > @@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
> >       }
> >  }
> >
> > -static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > +static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
> > +{
> > +     u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
> > +     union ixgbe_adv_tx_desc *tx_desc;
> > +
> > +     tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
> > +     tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);
>
> you have not addressed the descriptor cleaning path which makes this
> change rather pointless or even the driver behavior is broken.

Are you referring to 'while (ntc != ntu) {}' in
ixgbe_clean_xdp_tx_irq()? But I see no difference between that part
and the similar part 'for (i = 0; i < completed_frames; i++) {}' in
i40e_clean_xdp_tx_irq().

>
> point of such change is to limit the interrupts raised by HW once it is
> done with sending the descriptor. you still walk the descs one-by-one in
> ixgbe_clean_xdp_tx_irq().

Sorry, I must be missing something important. In my view, ixgbe only
kicks the hardware through ixgbe_xdp_ring_update_tail() at the end of
ixgbe_xmit_zc(), both before and after this series.

As to 'one-by-one', I see i40e also handles it like that in 'for (i = 0;
i < completed_frames; i++)' in i40e_clean_xdp_tx_irq(), and ice does the
same in ice_clean_xdp_irq_zc()?

Could you shed some light on this? Thanks in advance!

Thanks,
Jason

>
> > +}
> > +
> > +static void ixgbe_xmit_pkt(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc,
> > +                        int i)
> > +
> >  {
> >       struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
> >       union ixgbe_adv_tx_desc *tx_desc = NULL;
> >       struct ixgbe_tx_buffer *tx_bi;
> > -     struct xdp_desc desc;
> >       dma_addr_t dma;
> >       u32 cmd_type;
> >
> > -     if (!budget)
> > -             return true;
> > +     dma = xsk_buff_raw_get_dma(pool, desc[i].addr);
> > +     xsk_buff_raw_dma_sync_for_device(pool, dma, desc[i].len);
> >
> > -     while (likely(budget)) {
> > -             if (!netif_carrier_ok(xdp_ring->netdev))
> > -                     break;
> > +     tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> > +     tx_bi->bytecount = desc[i].len;
> > +     tx_bi->xdpf = NULL;
> > +     tx_bi->gso_segs = 1;
> >
> > -             if (!xsk_tx_peek_desc(pool, &desc))
> > -                     break;
> > +     tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> > +     tx_desc->read.buffer_addr = cpu_to_le64(dma);
> >
> > -             dma = xsk_buff_raw_get_dma(pool, desc.addr);
> > -             xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
> > +     cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> > +                IXGBE_ADVTXD_DCMD_DEXT |
> > +                IXGBE_ADVTXD_DCMD_IFCS;
> > +     cmd_type |= desc[i].len | IXGBE_TXD_CMD_EOP;
> > +     tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> > +     tx_desc->read.olinfo_status =
> > +             cpu_to_le32(desc[i].len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> >
> > -             tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> > -             tx_bi->bytecount = desc.len;
> > -             tx_bi->xdpf = NULL;
> > -             tx_bi->gso_segs = 1;
> > +     xdp_ring->next_to_use++;
> > +}
> >
> > -             tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> > -             tx_desc->read.buffer_addr = cpu_to_le64(dma);
> > +static void ixgbe_xmit_pkt_batch(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc)
> > +{
> > +     u32 i;
> >
> > -             /* put descriptor type bits */
> > -             cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> > -                        IXGBE_ADVTXD_DCMD_DEXT |
> > -                        IXGBE_ADVTXD_DCMD_IFCS;
> > -             cmd_type |= desc.len | IXGBE_TXD_CMD;
> > -             tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> > -             tx_desc->read.olinfo_status =
> > -                     cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> > +     unrolled_count(PKTS_PER_BATCH)
> > +     for (i = 0; i < PKTS_PER_BATCH; i++)
> > +             ixgbe_xmit_pkt(xdp_ring, desc, i);
> > +}
> >
> > -             xdp_ring->next_to_use++;
> > -             if (xdp_ring->next_to_use == xdp_ring->count)
> > -                     xdp_ring->next_to_use = 0;
> > +static void ixgbe_fill_tx_hw_ring(struct ixgbe_ring *xdp_ring,
> > +                               struct xdp_desc *descs, u32 nb_pkts)
> > +{
> > +     u32 batched, leftover, i;
> > +
> > +     batched = nb_pkts & ~(PKTS_PER_BATCH - 1);
> > +     leftover = nb_pkts & (PKTS_PER_BATCH - 1);
> > +     for (i = 0; i < batched; i += PKTS_PER_BATCH)
> > +             ixgbe_xmit_pkt_batch(xdp_ring, &descs[i]);
> > +     for (i = batched; i < batched + leftover; i++)
> > +             ixgbe_xmit_pkt(xdp_ring, &descs[i], 0);
> > +}
> >
> > -             budget--;
> > -     }
> > +static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > +{
> > +     struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
> > +     u32 nb_pkts, nb_processed = 0;
> >
> > -     if (tx_desc) {
> > -             ixgbe_xdp_ring_update_tail(xdp_ring);
> > -             xsk_tx_release(pool);
> > +     if (!netif_carrier_ok(xdp_ring->netdev))
> > +             return true;
> > +
> > +     nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
> > +     if (!nb_pkts)
> > +             return true;
> > +
> > +     if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
> > +             nb_processed = xdp_ring->count - xdp_ring->next_to_use;
> > +             ixgbe_fill_tx_hw_ring(xdp_ring, descs, nb_processed);
> > +             xdp_ring->next_to_use = 0;
> >       }
> >
> > -     return !!budget;
> > +     ixgbe_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed);
> > +
> > +     ixgbe_set_rs_bit(xdp_ring);
> > +     ixgbe_xdp_ring_update_tail(xdp_ring);
> > +
> > +     return nb_pkts < budget;
> >  }
> >
> >  static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
> > --
> > 2.41.3
> >


* RE: [Intel-wired-lan] [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
  2025-08-12  7:55 ` [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc Jason Xing
@ 2025-08-13 11:14   ` Loktionov, Aleksandr
  2025-08-13 11:44     ` Jason Xing
  0 siblings, 1 reply; 13+ messages in thread
From: Loktionov, Aleksandr @ 2025-08-13 11:14 UTC (permalink / raw)
  To: Jason Xing, davem@davemloft.net, edumazet@google.com,
	kuba@kernel.org, pabeni@redhat.com, horms@kernel.org,
	andrew+netdev@lunn.ch, Nguyen, Anthony L, Kitszel, Przemyslaw,
	sdf@fomichev.me, Zaremba, Larysa, Fijalkowski, Maciej
  Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
	Jason Xing



> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> Of Jason Xing
> Sent: Tuesday, August 12, 2025 9:55 AM
> To: davem@davemloft.net; edumazet@google.com; kuba@kernel.org;
> pabeni@redhat.com; horms@kernel.org; andrew+netdev@lunn.ch; Nguyen,
> Anthony L <anthony.l.nguyen@intel.com>; Kitszel, Przemyslaw
> <przemyslaw.kitszel@intel.com>; sdf@fomichev.me; Zaremba, Larysa
> <larysa.zaremba@intel.com>; Fijalkowski, Maciej
> <maciej.fijalkowski@intel.com>
> Cc: intel-wired-lan@lists.osuosl.org; netdev@vger.kernel.org; Jason
> Xing <kernelxing@tencent.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net v2 2/3] ixgbe: xsk: use
> ixgbe_desc_unused as the budget in ixgbe_xmit_zc
> 
> From: Jason Xing <kernelxing@tencent.com>
> 
> - Adjust ixgbe_desc_unused as the budget value.
> - Avoid checking desc_unused over and over again in the loop.
> 
> The patch makes ixgbe follow i40e driver that was done in commit
> 1fd972ebe523 ("i40e: move check of full Tx ring to outside of send
> loop").
> [ Note that the above i40e patch has problem when
> ixgbe_desc_unused(tx_ring)
> returns zero. The zero value as the budget value means we don't have
> any
> possible descs to be sent, so it should return true instead to tell
> the
> napi poll not to launch another poll to handle tx packets. Even
> though
> that patch behaves correctly by returning true in this case, it
> happens
> because of the unexpected underflow of the budget. Taking the
> current
> version of i40e_xmit_zc() as an example, it returns true as
> expected. ]
> Hence, this patch adds a standalone if statement of zero budget in
> front
> of ixgbe_xmit_zc() as explained before.
> 
> Use ixgbe_desc_unused to replace the original fixed budget with the
> number
> of available slots in the Tx ring. It can gain some performance.
You state “It can gain some performance” but provide no numbers
(before/after metrics, hardware, workload, methodology).
https://www.kernel.org/doc/html/latest/process/submitting-patches.html
asks to quantify optimizations with measurements and to discuss
trade-offs.
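
For example, a before/after comparison with the xdpsock sample
(assuming that is the tool used; eth0 and queue 0 are placeholders)
would be enough:

	# zero-copy Tx-only benchmark; record pps before and after
	./xdpsock -i eth0 -q 0 -t -z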

> 
If the change addresses a behavioral bug (e.g., incorrect NAPI completion behavior when budget is zero),
add Fixes: <sha1> ("commit subject") to help backporting and tracking.
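
For example (format illustration only; the right target commit is for
you to determine):

	Fixes: 1fd972ebe523 ("i40e: move check of full Tx ring to outside of send loop")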

> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
> In this version, I keep it as is (please see the following link)
> https://lore.kernel.org/intel-wired-
> lan/CAL+tcoAUW_J62aw3aGBru+0GmaTjoom1qu8Y=aiSc9EGU09Nww@mail.gmail.c
> om/
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 13 +++++--------
>  1 file changed, 5 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> index a463c5ac9c7c..f3d3f5c1cdc7 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> @@ -393,17 +393,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring
> *xdp_ring, unsigned int budget)
>  	struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
>  	union ixgbe_adv_tx_desc *tx_desc = NULL;
>  	struct ixgbe_tx_buffer *tx_bi;
> -	bool work_done = true;
>  	struct xdp_desc desc;
>  	dma_addr_t dma;
>  	u32 cmd_type;
> 
> -	while (likely(budget)) {
> -		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
> -			work_done = false;
> -			break;
> -		}
> +	if (!budget)
> +		return true;
> 
> +	while (likely(budget)) {
>  		if (!netif_carrier_ok(xdp_ring->netdev))
>  			break;
> 
> @@ -442,7 +439,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring
> *xdp_ring, unsigned int budget)
>  		xsk_tx_release(pool);
>  	}
> 
> -	return !!budget && work_done;
> +	return !!budget;
>  }
> 
>  static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
> @@ -505,7 +502,7 @@ bool ixgbe_clean_xdp_tx_irq(struct
> ixgbe_q_vector *q_vector,
>  	if (xsk_uses_need_wakeup(pool))
>  		xsk_set_tx_need_wakeup(pool);
> 
> -	return ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit);
> +	return ixgbe_xmit_zc(tx_ring, ixgbe_desc_unused(tx_ring));
>  }
> 
>  int ixgbe_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
> --
> 2.41.3



* Re: [Intel-wired-lan] [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc
  2025-08-13 11:14   ` [Intel-wired-lan] " Loktionov, Aleksandr
@ 2025-08-13 11:44     ` Jason Xing
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Xing @ 2025-08-13 11:44 UTC (permalink / raw)
  To: Loktionov, Aleksandr
  Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, horms@kernel.org, andrew+netdev@lunn.ch,
	Nguyen, Anthony L, Kitszel, Przemyslaw, sdf@fomichev.me,
	Zaremba, Larysa, Fijalkowski, Maciej,
	intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
	Jason Xing

On Wed, Aug 13, 2025 at 7:14 PM Loktionov, Aleksandr
<aleksandr.loktionov@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf
> > Of Jason Xing
> > Sent: Tuesday, August 12, 2025 9:55 AM
> > To: davem@davemloft.net; edumazet@google.com; kuba@kernel.org;
> > pabeni@redhat.com; horms@kernel.org; andrew+netdev@lunn.ch; Nguyen,
> > Anthony L <anthony.l.nguyen@intel.com>; Kitszel, Przemyslaw
> > <przemyslaw.kitszel@intel.com>; sdf@fomichev.me; Zaremba, Larysa
> > <larysa.zaremba@intel.com>; Fijalkowski, Maciej
> > <maciej.fijalkowski@intel.com>
> > Cc: intel-wired-lan@lists.osuosl.org; netdev@vger.kernel.org; Jason
> > Xing <kernelxing@tencent.com>
> > Subject: [Intel-wired-lan] [PATCH iwl-net v2 2/3] ixgbe: xsk: use
> > ixgbe_desc_unused as the budget in ixgbe_xmit_zc
> >
> > From: Jason Xing <kernelxing@tencent.com>
> >
> > - Adjust ixgbe_desc_unused as the budget value.
> > - Avoid checking desc_unused over and over again in the loop.
> >
> > The patch makes ixgbe follow i40e driver that was done in commit
> > 1fd972ebe523 ("i40e: move check of full Tx ring to outside of send
> > loop").
> > [ Note that the above i40e patch has problem when
> > ixgbe_desc_unused(tx_ring)
> > returns zero. The zero value as the budget value means we don't have
> > any
> > possible descs to be sent, so it should return true instead to tell
> > the
> > napi poll not to launch another poll to handle tx packets. Even
> > though
> > that patch behaves correctly by returning true in this case, it
> > happens
> > because of the unexpected underflow of the budget. Taking the
> > current
> > version of i40e_xmit_zc() as an example, it returns true as
> > expected. ]
> > Hence, this patch adds a standalone if statement of zero budget in
> > front
> > of ixgbe_xmit_zc() as explained before.
> >
> > Use ixgbe_desc_unused to replace the original fixed budget with the
> > number
> > of available slots in the Tx ring. It can gain some performance.
> You state “It can gain some performance” but provide no numbers
> (before/after metrics, hardware, workload, methodology).
> The https://www.kernel.org/doc/html/latest/process/submitting-patches.html
> ask to quantify optimizations with measurements and discuss trade‑offs.

Based on my understanding, there are two kinds of performance gains:
1) those that save some cycles and indeed reduce time but cannot easily
be observed, and 2) those that can be shown directly through various
tests. This series belongs to the former, given my limited tests. We
cannot deny the optimization just because we cannot see it in the
numbers; we can conclude it from theory.

As the official doc requires, I will remove all the related wording in
V3 to avoid further confusion. Thanks for sharing it with me.

>
> >
> If the change addresses a behavioral bug (e.g., incorrect NAPI completion behavior when budget is zero),
> add Fixes: <sha1> ("commit subject") to help backporting and tracking.

Well, it's not a bugfix. I just pointed out that the i40e patch that
had the bug was later overwritten/buried by another patch :)

Thanks,
Jason

>
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > ---
> > In this version, I keep it as is (please see the following link)
> > https://lore.kernel.org/intel-wired-
> > lan/CAL+tcoAUW_J62aw3aGBru+0GmaTjoom1qu8Y=aiSc9EGU09Nww@mail.gmail.c
> > om/
> > ---
> >  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 13 +++++--------
> >  1 file changed, 5 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > index a463c5ac9c7c..f3d3f5c1cdc7 100644
> > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > @@ -393,17 +393,14 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring
> > *xdp_ring, unsigned int budget)
> >       struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
> >       union ixgbe_adv_tx_desc *tx_desc = NULL;
> >       struct ixgbe_tx_buffer *tx_bi;
> > -     bool work_done = true;
> >       struct xdp_desc desc;
> >       dma_addr_t dma;
> >       u32 cmd_type;
> >
> > -     while (likely(budget)) {
> > -             if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
> > -                     work_done = false;
> > -                     break;
> > -             }
> > +     if (!budget)
> > +             return true;
> >
> > +     while (likely(budget)) {
> >               if (!netif_carrier_ok(xdp_ring->netdev))
> >                       break;
> >
> > @@ -442,7 +439,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring
> > *xdp_ring, unsigned int budget)
> >               xsk_tx_release(pool);
> >       }
> >
> > -     return !!budget && work_done;
> > +     return !!budget;
> >  }
> >
> >  static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
> > @@ -505,7 +502,7 @@ bool ixgbe_clean_xdp_tx_irq(struct
> > ixgbe_q_vector *q_vector,
> >       if (xsk_uses_need_wakeup(pool))
> >               xsk_set_tx_need_wakeup(pool);
> >
> > -     return ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit);
> > +     return ixgbe_xmit_zc(tx_ring, ixgbe_desc_unused(tx_ring));
> >  }
> >
> >  int ixgbe_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
> > --
> > 2.41.3
>


* Re: [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance
  2025-08-13  0:34     ` Jason Xing
@ 2025-08-13 18:08       ` Maciej Fijalkowski
  2025-08-14  0:33         ` Jason Xing
  0 siblings, 1 reply; 13+ messages in thread
From: Maciej Fijalkowski @ 2025-08-13 18:08 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	intel-wired-lan, netdev, Jason Xing

On Wed, Aug 13, 2025 at 08:34:52AM +0800, Jason Xing wrote:
> Hi Maciej,
> 
> On Tue, Aug 12, 2025 at 11:42 PM Maciej Fijalkowski
> <maciej.fijalkowski@intel.com> wrote:
> >
> > On Tue, Aug 12, 2025 at 03:55:04PM +0800, Jason Xing wrote:
> > > From: Jason Xing <kernelxing@tencent.com>
> > >
> >
> > Hi Jason,
> >
> > patches should be targetted at iwl-next as these are improvements, not
> > fixes.
> 
> Oh, right.
> 
> >
> > > Like what i40e driver initially did in commit 3106c580fb7cf
> > > ("i40e: Use batched xsk Tx interfaces to increase performance"), use
> > > the batched xsk feature to transmit packets.
> > >
> > > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > > ---
> > > In this version, I still choose use the current implementation. Last
> > > time at the first glance, I agreed 'i' is useless but it is not.
> > > https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/
> >
> > dare to share the performance improvement (if any, in the current form)?
> 
> I tested the whole series, sorry, no actual improvement could be seen
> through xdpsock. Not even with the first series. :(

So if I were you I would hesitate to post it :P in the past, batching
approaches have always yielded a performance gain.

> 
> >
> > also you have not mentioned in v1->v2 that you dropped the setting of
> > xdp_zc_max_segs, which is a step in a correct path.
> 
> Oops, I blindly dropped the last patch without carefully checking it.
> Thanks for showing me.
> 
> I set it as four for ixgbe. I'm not that sure if there is any theory
> behind setting this value?

you're confusing two different things. xdp_zc_max_segs is related to
multi-buffer support in xsk ZC, whereas you're referring to the loop
unrolling counter.
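
a minimal sketch of the difference (the value 1 is only an assumption
for a driver without xsk multi-buffer support; the core initializes the
field to 1 by default):

	/* capability reported to the xsk core: max frags per ZC frame */
	netdev->xdp_zc_max_segs = 1;

	/* PKTS_PER_BATCH, by contrast, only controls how many whole
	 * packets the xmit loop handles per unrolled iteration
	 */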

> 
> >
> > > ---
> > >  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
> > >  1 file changed, 72 insertions(+), 34 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
> > > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > @@ -2,12 +2,15 @@
> > >  /* Copyright(c) 2018 Intel Corporation. */
> > >
> > >  #include <linux/bpf_trace.h>
> > > +#include <linux/unroll.h>
> > >  #include <net/xdp_sock_drv.h>
> > >  #include <net/xdp.h>
> > >
> > >  #include "ixgbe.h"
> > >  #include "ixgbe_txrx_common.h"
> > >
> > > +#define PKTS_PER_BATCH 4
> > > +
> > >  struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
> > >                                    struct ixgbe_ring *ring)
> > >  {
> > > @@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
> > >       }
> > >  }
> > >
> > > -static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > > +static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
> > > +{
> > > +     u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
> > > +     union ixgbe_adv_tx_desc *tx_desc;
> > > +
> > > +     tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
> > > +     tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);
> >
> > you have not addressed the descriptor cleaning path which makes this
> > change rather pointless or even the driver behavior is broken.
> 
> Are you referring to 'while (ntc != ntu) {}' in
> ixgbe_clean_xdp_tx_irq()? But I see no difference between that part
> and the similar part 'for (i = 0; i < completed_frames; i++) {}' in
> i40e_clean_xdp_tx_irq()

in i40e, that loop is preceded by this early exit:

	if (likely(!tx_ring->xdp_tx_active)) {
		xsk_frames = completed_frames;
		goto skip;
	}
> 
> >
> > point of such change is to limit the interrupts raised by HW once it is
> > done with sending the descriptor. you still walk the descs one-by-one in
> > ixgbe_clean_xdp_tx_irq().
> 
> Sorry, I must be missing something important. In my view only at the
> end of ixgbe_xmit_zc(), ixgbe always kicks the hardware through
> ixgbe_xdp_ring_update_tail() before/after this series.
> 
> As to 'one-by-one', I see i40e also handles like that in 'for (i = 0;
> i < completed_frames; i++)' in i40e_clean_xdp_tx_irq(). Ice does this
> in ice_clean_xdp_irq_zc()?

i40e does not look up the DD bit from the descriptor. plus, the loop
you refer to is taken only when (see above) xdp_tx_active is not 0
(meaning that there has been some XDP_TX action on the queue and we
have to clean the buffers in a different way).

in general I would advise looking at ice, as i40e writes back the Tx
ring head, which is used in its cleaning logic. ice does not have this
feature, and neither does ixgbe.
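
for reference, the i40e completion path derives the number of finished
frames from the written-back head, roughly (paraphrased from
i40e_clean_xdp_tx_irq()):

	u32 head_idx = i40e_get_head(tx_ring);	/* head written back by HW */

	if (head_idx < tx_ring->next_to_clean)
		head_idx += tx_ring->count;
	completed_frames = head_idx - tx_ring->next_to_clean;

ixgbe has no head writeback, so a DD-bit check on the last RS-tagged
descriptor (the ice approach) is the way to batch the cleaning.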

> 
> Could you shed some light on this? Thanks in advance!
> 
> Thanks,
> Jason
> 
> >
> > > +}
> > > +
> > > +static void ixgbe_xmit_pkt(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc,
> > > +                        int i)
> > > +
> > >  {
> > >       struct xsk_buff_pool *pool = xdp_ring->xsk_pool;
> > >       union ixgbe_adv_tx_desc *tx_desc = NULL;
> > >       struct ixgbe_tx_buffer *tx_bi;
> > > -     struct xdp_desc desc;
> > >       dma_addr_t dma;
> > >       u32 cmd_type;
> > >
> > > -     if (!budget)
> > > -             return true;
> > > +     dma = xsk_buff_raw_get_dma(pool, desc[i].addr);
> > > +     xsk_buff_raw_dma_sync_for_device(pool, dma, desc[i].len);
> > >
> > > -     while (likely(budget)) {
> > > -             if (!netif_carrier_ok(xdp_ring->netdev))
> > > -                     break;
> > > +     tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> > > +     tx_bi->bytecount = desc[i].len;
> > > +     tx_bi->xdpf = NULL;
> > > +     tx_bi->gso_segs = 1;
> > >
> > > -             if (!xsk_tx_peek_desc(pool, &desc))
> > > -                     break;
> > > +     tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> > > +     tx_desc->read.buffer_addr = cpu_to_le64(dma);
> > >
> > > -             dma = xsk_buff_raw_get_dma(pool, desc.addr);
> > > -             xsk_buff_raw_dma_sync_for_device(pool, dma, desc.len);
> > > +     cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> > > +                IXGBE_ADVTXD_DCMD_DEXT |
> > > +                IXGBE_ADVTXD_DCMD_IFCS;
> > > +     cmd_type |= desc[i].len | IXGBE_TXD_CMD_EOP;
> > > +     tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> > > +     tx_desc->read.olinfo_status =
> > > +             cpu_to_le32(desc[i].len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> > >
> > > -             tx_bi = &xdp_ring->tx_buffer_info[xdp_ring->next_to_use];
> > > -             tx_bi->bytecount = desc.len;
> > > -             tx_bi->xdpf = NULL;
> > > -             tx_bi->gso_segs = 1;
> > > +     xdp_ring->next_to_use++;
> > > +}
> > >
> > > -             tx_desc = IXGBE_TX_DESC(xdp_ring, xdp_ring->next_to_use);
> > > -             tx_desc->read.buffer_addr = cpu_to_le64(dma);
> > > +static void ixgbe_xmit_pkt_batch(struct ixgbe_ring *xdp_ring, struct xdp_desc *desc)
> > > +{
> > > +     u32 i;
> > >
> > > -             /* put descriptor type bits */
> > > -             cmd_type = IXGBE_ADVTXD_DTYP_DATA |
> > > -                        IXGBE_ADVTXD_DCMD_DEXT |
> > > -                        IXGBE_ADVTXD_DCMD_IFCS;
> > > -             cmd_type |= desc.len | IXGBE_TXD_CMD;
> > > -             tx_desc->read.cmd_type_len = cpu_to_le32(cmd_type);
> > > -             tx_desc->read.olinfo_status =
> > > -                     cpu_to_le32(desc.len << IXGBE_ADVTXD_PAYLEN_SHIFT);
> > > +     unrolled_count(PKTS_PER_BATCH)
> > > +     for (i = 0; i < PKTS_PER_BATCH; i++)
> > > +             ixgbe_xmit_pkt(xdp_ring, desc, i);
> > > +}
> > >
> > > -             xdp_ring->next_to_use++;
> > > -             if (xdp_ring->next_to_use == xdp_ring->count)
> > > -                     xdp_ring->next_to_use = 0;
> > > +static void ixgbe_fill_tx_hw_ring(struct ixgbe_ring *xdp_ring,
> > > +                               struct xdp_desc *descs, u32 nb_pkts)
> > > +{
> > > +     u32 batched, leftover, i;
> > > +
> > > +     batched = nb_pkts & ~(PKTS_PER_BATCH - 1);
> > > +     leftover = nb_pkts & (PKTS_PER_BATCH - 1);
> > > +     for (i = 0; i < batched; i += PKTS_PER_BATCH)
> > > +             ixgbe_xmit_pkt_batch(xdp_ring, &descs[i]);
> > > +     for (i = batched; i < batched + leftover; i++)
> > > +             ixgbe_xmit_pkt(xdp_ring, &descs[i], 0);
> > > +}
> > >
> > > -             budget--;
> > > -     }
> > > +static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > > +{
> > > +     struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
> > > +     u32 nb_pkts, nb_processed = 0;
> > >
> > > -     if (tx_desc) {
> > > -             ixgbe_xdp_ring_update_tail(xdp_ring);
> > > -             xsk_tx_release(pool);
> > > +     if (!netif_carrier_ok(xdp_ring->netdev))
> > > +             return true;
> > > +
> > > +     nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
> > > +     if (!nb_pkts)
> > > +             return true;
> > > +
> > > +     if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
> > > +             nb_processed = xdp_ring->count - xdp_ring->next_to_use;
> > > +             ixgbe_fill_tx_hw_ring(xdp_ring, descs, nb_processed);
> > > +             xdp_ring->next_to_use = 0;
> > >       }
> > >
> > > -     return !!budget;
> > > +     ixgbe_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed);
> > > +
> > > +     ixgbe_set_rs_bit(xdp_ring);
> > > +     ixgbe_xdp_ring_update_tail(xdp_ring);
> > > +
> > > +     return nb_pkts < budget;
> > >  }
> > >
> > >  static void ixgbe_clean_xdp_tx_buffer(struct ixgbe_ring *tx_ring,
> > > --
> > > 2.41.3
> > >

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance
  2025-08-13 18:08       ` Maciej Fijalkowski
@ 2025-08-14  0:33         ` Jason Xing
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Xing @ 2025-08-14  0:33 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: davem, edumazet, kuba, pabeni, horms, andrew+netdev,
	anthony.l.nguyen, przemyslaw.kitszel, sdf, larysa.zaremba,
	intel-wired-lan, netdev, Jason Xing

On Thu, Aug 14, 2025 at 2:09 AM Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> On Wed, Aug 13, 2025 at 08:34:52AM +0800, Jason Xing wrote:
> > Hi Maciej,
> >
> > On Tue, Aug 12, 2025 at 11:42 PM Maciej Fijalkowski
> > <maciej.fijalkowski@intel.com> wrote:
> > >
> > > On Tue, Aug 12, 2025 at 03:55:04PM +0800, Jason Xing wrote:
> > > > From: Jason Xing <kernelxing@tencent.com>
> > > >
> > >
> > > Hi Jason,
> > >
> > > patches should be targeted at iwl-next as these are improvements, not
> > > fixes.
> >
> > Oh, right.
> >
> > >
> > > > Like what the i40e driver initially did in commit 3106c580fb7cf
> > > > ("i40e: Use batched xsk Tx interfaces to increase performance"), use
> > > > the batched xsk feature to transmit packets.
> > > >
> > > > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > > > ---
> > > > In this version, I still chose to keep the current implementation. Last
> > > > time, at first glance, I agreed that 'i' was useless, but it is not.
> > > > https://lore.kernel.org/intel-wired-lan/CAL+tcoADu-ZZewsZzGDaL7NugxFTWO_Q+7WsLHs3Mx-XHjJnyg@mail.gmail.com/
> > >
> > > dare to share the performance improvement (if any, in the current form)?
> >
> > I tested the whole series; sorry, no actual improvement could be seen
> > with xdpsock. Not even with the first series. :(
>
> So if I were you I would hesitate with posting it :P In the past, batching

(I'm definitely not an Intel NIC expert, but I'm still willing to write
some code on the driver side. I need to study more.)

> approaches always yielded performance gain.

No, I still expect that no better numbers can be seen with xdpsock even
with further tweaks. In particular, yesterday I saw zerocopy mode
already hit 70% of full line rate, which in all likelihood means the
hardware itself is the bottleneck. That is also the answer to what you
questioned in that patch[0]. On most advanced NICs zerocopy mode is
much faster than copy mode; ixgbe is the exception, where the link
seems to cap the maximum throughput of AF_XDP.

[0]: https://lore.kernel.org/all/CAL+tcoAst1xs=xCLykUoj1=Vj-0LtVyK-qrcDyoy4mQrHgW1kg@mail.gmail.com/
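
For context, those numbers came from the xdpsock benchmark; a typical
zero-copy txonly run looks something like this (interface and queue
number are placeholders):

	./xdpsock -i eth0 -q 0 -t -z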

>
> >
> > >
> > > Also, you have not mentioned in v1->v2 that you dropped the setting of
> > > xdp_zc_max_segs, which is a step in the right direction.

In v1, you asked me to drop the multi-buffer support[1], so I did.
Yesterday I wrongly corrected myself and convinced myself that
xdp_zc_max_segs was related to the batching process.

IIUC, do you already have these multi-buffer patches locally, or have
you decided to implement them yourself?

[1]: https://lore.kernel.org/intel-wired-lan/aINVrP8vrxIkxhZr@boxer/
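
For what it's worth, advertising that limit looks like a one-liner at
netdev registration time. A minimal sketch, assuming a hypothetical
IXGBE_MAX_ZC_SEGS constant (the real value would have to reflect how
many data descriptors the HW can chain per frame, the way i40e derives
it from I40E_MAX_BUFFER_TXD):

	/* hypothetical hunk in ixgbe_probe(), before register_netdev() */
	netdev->xdp_zc_max_segs = IXGBE_MAX_ZC_SEGS;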

> >
> > Oops, I blindly dropped the last patch without carefully checking it.
> > Thanks for pointing that out.
> >
> > I set it to four for ixgbe. I'm not sure whether there is any theory
> > behind choosing this value.
>
> You're confusing two different things: xdp_zc_max_segs is related to
> multi-buffer support in XSK ZC, whereas you're referring to the loop
> unrolling counter.

No, what actually confuses me is the reasoning behind the value of xdp_zc_max_segs.

>
> >
> > >
> > > > ---
> > > >  drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 106 +++++++++++++------
> > > >  1 file changed, 72 insertions(+), 34 deletions(-)
> > > >
> > > > diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > index f3d3f5c1cdc7..9fe2c4bf8bc5 100644
> > > > --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> > > > @@ -2,12 +2,15 @@
> > > >  /* Copyright(c) 2018 Intel Corporation. */
> > > >
> > > >  #include <linux/bpf_trace.h>
> > > > +#include <linux/unroll.h>
> > > >  #include <net/xdp_sock_drv.h>
> > > >  #include <net/xdp.h>
> > > >
> > > >  #include "ixgbe.h"
> > > >  #include "ixgbe_txrx_common.h"
> > > >
> > > > +#define PKTS_PER_BATCH 4
> > > > +
> > > >  struct xsk_buff_pool *ixgbe_xsk_pool(struct ixgbe_adapter *adapter,
> > > >                                    struct ixgbe_ring *ring)
> > > >  {
> > > > @@ -388,58 +391,93 @@ void ixgbe_xsk_clean_rx_ring(struct ixgbe_ring *rx_ring)
> > > >       }
> > > >  }
> > > >
> > > > -static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
> > > > +static void ixgbe_set_rs_bit(struct ixgbe_ring *xdp_ring)
> > > > +{
> > > > +     u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
> > > > +     union ixgbe_adv_tx_desc *tx_desc;
> > > > +
> > > > +     tx_desc = IXGBE_TX_DESC(xdp_ring, ntu);
> > > > +     tx_desc->read.cmd_type_len |= cpu_to_le32(IXGBE_TXD_CMD_RS);
> > >
> > > you have not addressed the descriptor cleaning path, which makes this
> > > change rather pointless, or even breaks the driver behavior.
> >
> > Are you referring to the 'while (ntc != ntu) {}' loop in
> > ixgbe_clean_xdp_tx_irq()? But I see no difference between that part
> > and the similar 'for (i = 0; i < completed_frames; i++) {}' loop in
> > i40e_clean_xdp_tx_irq().
>
>         if (likely(!tx_ring->xdp_tx_active)) {
>                 xsk_frames = completed_frames;
>                 goto skip;
>         }

Thanks for the pointer. I will append a patch similar to this[2] to
the series; it's exactly the one that helps ramp up the speed.

[2]:
commit 5574ff7b7b3d864556173bf822796593451a6b8c
Author: Magnus Karlsson <magnus.karlsson@intel.com>
Date:   Tue Jun 23 11:44:16 2020 +0200

    i40e: optimize AF_XDP Tx completion path

    Improve the performance of the AF_XDP zero-copy Tx completion
    path. When there are no XDP buffers being sent using XDP_TX or
    XDP_REDIRECT, we do not have go through the SW ring to clean up any
    entries since the AF_XDP path does not use these. In these cases, just
    fast forward the next-to-use counter and skip going through the SW
    ring. The limit on the maximum number of entries to complete is also
    removed since the algorithm is now O(1). To simplify the code path, the
    maximum number of entries to complete for the XDP path is therefore
    also increased from 256 to 512 (the default number of Tx HW
    descriptors). This should be fine since the completion in the XDP path
    is faster than in the SKB path that has 256 as the maximum number.
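
To check my understanding, here is a rough sketch of how that fast path
could translate to ixgbe (untested; ixgbe has no xdp_tx_active member
today, so the field and the exact placement are my assumptions):

	/* inside ixgbe_clean_xdp_tx_irq(), once completed_frames is known */
	u32 xsk_frames = 0;
	u16 ntc = tx_ring->next_to_clean;
	u32 i;

	if (likely(!tx_ring->xdp_tx_active)) {
		/* Pure AF_XDP traffic: no SW ring entries to unmap,
		 * so skip walking the ring entirely.
		 */
		xsk_frames = completed_frames;
		goto skip;
	}

	for (i = 0; i < completed_frames; i++) {
		struct ixgbe_tx_buffer *tx_bi = &tx_ring->tx_buffer_info[ntc];

		if (tx_bi->xdpf) {
			/* XDP_TX/XDP_REDIRECT frame: unmap and free it */
			ixgbe_clean_xdp_tx_buffer(tx_ring, tx_bi);
			tx_bi->xdpf = NULL;
			tx_ring->xdp_tx_active--;
		} else {
			xsk_frames++;
		}

		if (unlikely(++ntc == tx_ring->count))
			ntc = 0;
	}

skip:
	tx_ring->next_to_clean += completed_frames;
	if (unlikely(tx_ring->next_to_clean >= tx_ring->count))
		tx_ring->next_to_clean -= tx_ring->count;

	if (xsk_frames)
		xsk_tx_completed(tx_ring->xsk_pool, xsk_frames);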

> >
> > >
> > > The point of such a change is to limit the interrupts raised by the HW
> > > once it is done sending the descriptors. You still walk the descs
> > > one-by-one in ixgbe_clean_xdp_tx_irq().
> >
> > Sorry, I must be missing something important. In my view, ixgbe only
> > kicks the hardware through ixgbe_xdp_ring_update_tail() at the end of
> > ixgbe_xmit_zc(), both before and after this series.
> >
> > As for 'one-by-one', I see i40e also handles it that way in the
> > 'for (i = 0; i < completed_frames; i++)' loop in
> > i40e_clean_xdp_tx_irq(). Doesn't ice do the same in
> > ice_clean_xdp_irq_zc()?
>
> i40e does not look up the DD bit from the descriptor. Plus, the loop you
> refer to is taken only when (see above) xdp_tx_active is not 0 (meaning
> that there have been some XDP_TX actions on the queue and we have to
> clean those buffers in a different way).

I think I now know what to do next: implement the xdp_tx_active bookkeeping.
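
As a first cut, I imagine the bookkeeping itself is only two hunks
(field name borrowed from i40e/ice; none of this exists in ixgbe yet):

	/* new member in struct ixgbe_ring */
	u16 xdp_tx_active;	/* XDP_TX/XDP_REDIRECT frames in flight */

	/* Tx side: when an xdp_frame is placed on the XDP ring */
	tx_buffer->xdpf = xdpf;
	tx_ring->xdp_tx_active++;

The matching decrement would then live in the cleaning path sketched
earlier in this thread.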

>
> In general I would advise looking at ice, as i40e writes back the Tx
> ring head, which is used in its cleaning logic. ice does not have this
> feature, and neither does ixgbe.

Thanks. I will also dig into the datasheets, which are all I have to go on.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2025-08-14  0:33 UTC | newest]

Thread overview: 13+ messages
2025-08-12  7:55 [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Jason Xing
2025-08-12  7:55 ` [PATCH iwl-net v2 1/3] ixgbe: xsk: remove budget from ixgbe_clean_xdp_tx_irq Jason Xing
2025-08-12 15:42   ` Maciej Fijalkowski
2025-08-12  7:55 ` [PATCH iwl-net v2 2/3] ixgbe: xsk: use ixgbe_desc_unused as the budget in ixgbe_xmit_zc Jason Xing
2025-08-13 11:14   ` [Intel-wired-lan] " Loktionov, Aleksandr
2025-08-13 11:44     ` Jason Xing
2025-08-12  7:55 ` [PATCH iwl-net v2 3/3] ixgbe: xsk: support batched xsk Tx interfaces to increase performance Jason Xing
2025-08-12 15:42   ` Maciej Fijalkowski
2025-08-13  0:34     ` Jason Xing
2025-08-13 18:08       ` Maciej Fijalkowski
2025-08-14  0:33         ` Jason Xing
2025-08-12 20:44 ` [PATCH iwl-net v2 0/3] ixgbe: xsk: a couple of changes for zerocopy Tony Nguyen
2025-08-12 23:38   ` Jason Xing
