* [Intel-wired-lan] [PATCH net-next v2 0/3] i40e: improve AF_XDP performance
@ 2020-06-23 9:44 Magnus Karlsson
2020-06-23 9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Magnus Karlsson @ 2020-06-23 9:44 UTC (permalink / raw)
To: intel-wired-lan
This small series improves AF_XDP performance for the i40e NIC. The
first patch optimizes the Tx completion path for AF_XDP. The second
one removes a division in the data path for the normal SKB path, XDP
as well as AF_XDP. Finally, the third one moves the test of a full Tx
ring to outside the send loop. Overall, the throughput of the l2fwd
application in xdpsock improves by around 8% on my machine.
v1->v2:
* Removed unnecessary variables in i40e_clean_xdp_tx_irq [Sridhar]
* Added one further optimization to Tx path in a new patch [Sridhar]
* Fixed two API documentation warnings with make W=1
This patch set has been applied against commit 8af7b4525acf ("Merge branch
'net-atlantic-additional-A2-features'").
Thanks: Magnus
Magnus Karlsson (3):
i40e: optimize AF_XDP Tx completion path
i40e: eliminate division in napi_poll data path
i40e: move check of full Tx ring to outside of send loop
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 17 ++++++---
drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 +
drivers/net/ethernet/intel/i40e/i40e_xsk.c | 57 +++++++++++++----------------
drivers/net/ethernet/intel/i40e/i40e_xsk.h | 3 +-
4 files changed, 39 insertions(+), 39 deletions(-)
--
2.7.4
^ permalink raw reply	[flat|nested] 8+ messages in thread

* [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path
  2020-06-23  9:44 [Intel-wired-lan] [PATCH net-next v2 0/3] i40e: improve AF_XDP performance Magnus Karlsson
@ 2020-06-23  9:44 ` Magnus Karlsson
  2020-06-24  1:08   ` Samudrala, Sridhar
  2020-06-25 17:03   ` Bowers, AndrewX
  2 siblings, 2 replies; 8+ messages in thread
From: Magnus Karlsson @ 2020-06-23  9:44 UTC (permalink / raw)
To: intel-wired-lan

Improve the performance of the AF_XDP zero-copy Tx completion
path. When there are no XDP buffers being sent using XDP_TX or
XDP_REDIRECT, we do not have to go through the SW ring to clean up any
entries since the AF_XDP path does not use these. In these cases, just
fast forward the next-to-clean counter and skip going through the SW
ring. The limit on the maximum number of entries to complete is also
removed since the algorithm is now O(1). To simplify the code path, the
maximum number of entries to complete for the XDP path is therefore
also increased from 256 to 512 (the default number of Tx HW
descriptors). This should be fine since the completion in the XDP path
is faster than in the SKB path that has 256 as the maximum number.

This patch provides around 4% throughput improvement for the l2fwd
application in xdpsock on my machine.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> --- drivers/net/ethernet/intel/i40e/i40e_txrx.c | 3 +- drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 + drivers/net/ethernet/intel/i40e/i40e_xsk.c | 43 +++++++++++++++-------------- drivers/net/ethernet/intel/i40e/i40e_xsk.h | 3 +- 4 files changed, 27 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c index f9555c8..9334abd 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c @@ -2580,7 +2580,7 @@ int i40e_napi_poll(struct napi_struct *napi, int budget) */ i40e_for_each_ring(ring, q_vector->tx) { bool wd = ring->xsk_umem ? - i40e_clean_xdp_tx_irq(vsi, ring, budget) : + i40e_clean_xdp_tx_irq(vsi, ring) : i40e_clean_tx_irq(vsi, ring, budget); if (!wd) { @@ -3538,6 +3538,7 @@ static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, */ smp_wmb(); + xdp_ring->xdp_tx_active++; i++; if (i == xdp_ring->count) i = 0; diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h index 5c25597..c16fcd9 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h @@ -371,6 +371,7 @@ struct i40e_ring { /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; + u16 xdp_tx_active; u8 atr_sample_rate; u8 atr_count; diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index 7276580..86635f5 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -378,6 +378,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) **/ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) { + unsigned int sent_frames = 0, total_bytes = 0; struct i40e_tx_desc *tx_desc = NULL; struct i40e_tx_buffer *tx_bi; bool work_done = true; @@ -408,6 +409,9 @@ static bool i40e_xmit_zc(struct 
i40e_ring *xdp_ring, unsigned int budget) | I40E_TX_DESC_CMD_EOP, 0, desc.len, 0); + sent_frames++; + total_bytes += tx_bi->bytecount; + xdp_ring->next_to_use++; if (xdp_ring->next_to_use == xdp_ring->count) xdp_ring->next_to_use = 0; @@ -420,6 +424,7 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) i40e_xdp_ring_update_tail(xdp_ring); xsk_umem_consume_tx_done(xdp_ring->xsk_umem); + i40e_update_tx_stats(xdp_ring, sent_frames, total_bytes); } return !!budget && work_done; @@ -434,6 +439,7 @@ static void i40e_clean_xdp_tx_buffer(struct i40e_ring *tx_ring, struct i40e_tx_buffer *tx_bi) { xdp_return_frame(tx_bi->xdpf); + tx_ring->xdp_tx_active--; dma_unmap_single(tx_ring->dev, dma_unmap_addr(tx_bi, dma), dma_unmap_len(tx_bi, len), DMA_TO_DEVICE); @@ -447,27 +453,25 @@ static void i40e_clean_xdp_tx_buffer(struct i40e_ring *tx_ring, * * Returns true if cleanup/tranmission is done. **/ -bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, - struct i40e_ring *tx_ring, int napi_budget) +bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring) { - unsigned int ntc, total_bytes = 0, budget = vsi->work_limit; - u32 i, completed_frames, frames_ready, xsk_frames = 0; + unsigned int ntc, budget = vsi->work_limit; struct xdp_umem *umem = tx_ring->xsk_umem; + u32 i, completed_frames, xsk_frames = 0; u32 head_idx = i40e_get_head(tx_ring); - bool work_done = true, xmit_done; struct i40e_tx_buffer *tx_bi; + bool xmit_done; if (head_idx < tx_ring->next_to_clean) head_idx += tx_ring->count; - frames_ready = head_idx - tx_ring->next_to_clean; + completed_frames = head_idx - tx_ring->next_to_clean; - if (frames_ready == 0) { + if (completed_frames == 0) goto out_xmit; - } else if (frames_ready > budget) { - completed_frames = budget; - work_done = false; - } else { - completed_frames = frames_ready; + + if (likely(!tx_ring->xdp_tx_active)) { + xsk_frames = completed_frames; + goto skip; } ntc = tx_ring->next_to_clean; @@ -475,18 +479,18 @@ bool 
i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, for (i = 0; i < completed_frames; i++) { tx_bi = &tx_ring->tx_bi[ntc]; - if (tx_bi->xdpf) + if (tx_bi->xdpf) { i40e_clean_xdp_tx_buffer(tx_ring, tx_bi); - else + tx_bi->xdpf = NULL; + } else { xsk_frames++; - - tx_bi->xdpf = NULL; - total_bytes += tx_bi->bytecount; + } if (++ntc >= tx_ring->count) ntc = 0; } +skip: tx_ring->next_to_clean += completed_frames; if (unlikely(tx_ring->next_to_clean >= tx_ring->count)) tx_ring->next_to_clean -= tx_ring->count; @@ -494,8 +498,7 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, if (xsk_frames) xsk_umem_complete_tx(umem, xsk_frames); - i40e_arm_wb(tx_ring, vsi, budget); - i40e_update_tx_stats(tx_ring, completed_frames, total_bytes); + i40e_arm_wb(tx_ring, vsi, completed_frames); out_xmit: if (xsk_umem_uses_need_wakeup(tx_ring->xsk_umem)) @@ -503,7 +506,7 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, xmit_done = i40e_xmit_zc(tx_ring, budget); - return work_done && xmit_done; + return xmit_done; } /** diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.h b/drivers/net/ethernet/intel/i40e/i40e_xsk.h index ea919a7d..c524c14 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.h +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.h @@ -15,8 +15,7 @@ int i40e_xsk_umem_setup(struct i40e_vsi *vsi, struct xdp_umem *umem, bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 cleaned_count); int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget); -bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, - struct i40e_ring *tx_ring, int napi_budget); +bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring); int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags); int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring); void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring); -- 2.7.4 ^ permalink raw reply related [flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path 2020-06-23 9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson @ 2020-06-24 1:08 ` Samudrala, Sridhar 2020-06-25 17:03 ` Bowers, AndrewX 1 sibling, 0 replies; 8+ messages in thread From: Samudrala, Sridhar @ 2020-06-24 1:08 UTC (permalink / raw) To: intel-wired-lan On 6/23/2020 2:44 AM, Magnus Karlsson wrote: > Improve the performance of the AF_XDP zero-copy Tx completion > path. When there are no XDP buffers being sent using XDP_TX or > XDP_REDIRECT, we do not have go through the SW ring to clean up any > entries since the AF_XDP path does not use these. In these cases, just > fast forward the next-to-use counter and skip going through the SW > ring. The limit on the maximum number of entries to complete is also > removed since the algorithm is now O(1). To simplify the code path, the > maximum number of entries to complete for the XDP path is therefore > also increased from 256 to 512 (the default number of Tx HW > descriptors). This should be fine since the completion in the XDP path > is faster than in the SKB path that has 256 as the maximum number. > > This patch provides around 4% throughput improvement for the l2fwd > application in xdpsock on my machine. 
> > Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com> > --- > drivers/net/ethernet/intel/i40e/i40e_txrx.c | 3 +- > drivers/net/ethernet/intel/i40e/i40e_txrx.h | 1 + > drivers/net/ethernet/intel/i40e/i40e_xsk.c | 43 +++++++++++++++-------------- > drivers/net/ethernet/intel/i40e/i40e_xsk.h | 3 +- > 4 files changed, 27 insertions(+), 23 deletions(-) > > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c > index f9555c8..9334abd 100644 > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c > @@ -2580,7 +2580,7 @@ int i40e_napi_poll(struct napi_struct *napi, int budget) > */ > i40e_for_each_ring(ring, q_vector->tx) { > bool wd = ring->xsk_umem ? > - i40e_clean_xdp_tx_irq(vsi, ring, budget) : > + i40e_clean_xdp_tx_irq(vsi, ring) : > i40e_clean_tx_irq(vsi, ring, budget); > > if (!wd) { > @@ -3538,6 +3538,7 @@ static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, > */ > smp_wmb(); > > + xdp_ring->xdp_tx_active++; > i++; > if (i == xdp_ring->count) > i = 0; > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h > index 5c25597..c16fcd9 100644 > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h > @@ -371,6 +371,7 @@ struct i40e_ring { > /* used in interrupt processing */ > u16 next_to_use; > u16 next_to_clean; > + u16 xdp_tx_active; > > u8 atr_sample_rate; > u8 atr_count; > diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c > index 7276580..86635f5 100644 > --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c > +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c > @@ -378,6 +378,7 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) > **/ > static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) > { > + unsigned int sent_frames = 0, 
total_bytes = 0; > struct i40e_tx_desc *tx_desc = NULL; > struct i40e_tx_buffer *tx_bi; > bool work_done = true; > @@ -408,6 +409,9 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) > | I40E_TX_DESC_CMD_EOP, > 0, desc.len, 0); > > + sent_frames++; > + total_bytes += tx_bi->bytecount; > + > xdp_ring->next_to_use++; > if (xdp_ring->next_to_use == xdp_ring->count) > xdp_ring->next_to_use = 0; > @@ -420,6 +424,7 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget) > i40e_xdp_ring_update_tail(xdp_ring); > > xsk_umem_consume_tx_done(xdp_ring->xsk_umem); > + i40e_update_tx_stats(xdp_ring, sent_frames, total_bytes); > } > > return !!budget && work_done; > @@ -434,6 +439,7 @@ static void i40e_clean_xdp_tx_buffer(struct i40e_ring *tx_ring, > struct i40e_tx_buffer *tx_bi) > { > xdp_return_frame(tx_bi->xdpf); > + tx_ring->xdp_tx_active--; > dma_unmap_single(tx_ring->dev, > dma_unmap_addr(tx_bi, dma), > dma_unmap_len(tx_bi, len), DMA_TO_DEVICE); > @@ -447,27 +453,25 @@ static void i40e_clean_xdp_tx_buffer(struct i40e_ring *tx_ring, > * > * Returns true if cleanup/tranmission is done. 
> **/ > -bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, > - struct i40e_ring *tx_ring, int napi_budget) > +bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring) > { > - unsigned int ntc, total_bytes = 0, budget = vsi->work_limit; > - u32 i, completed_frames, frames_ready, xsk_frames = 0; > + unsigned int ntc, budget = vsi->work_limit; > struct xdp_umem *umem = tx_ring->xsk_umem; > + u32 i, completed_frames, xsk_frames = 0; > u32 head_idx = i40e_get_head(tx_ring); > - bool work_done = true, xmit_done; > struct i40e_tx_buffer *tx_bi; > + bool xmit_done; > > if (head_idx < tx_ring->next_to_clean) > head_idx += tx_ring->count; > - frames_ready = head_idx - tx_ring->next_to_clean; > + completed_frames = head_idx - tx_ring->next_to_clean; > > - if (frames_ready == 0) { > + if (completed_frames == 0) > goto out_xmit; > - } else if (frames_ready > budget) { > - completed_frames = budget; > - work_done = false; > - } else { > - completed_frames = frames_ready; > + > + if (likely(!tx_ring->xdp_tx_active)) { > + xsk_frames = completed_frames; > + goto skip; > } > > ntc = tx_ring->next_to_clean; > @@ -475,18 +479,18 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, > for (i = 0; i < completed_frames; i++) { > tx_bi = &tx_ring->tx_bi[ntc]; > > - if (tx_bi->xdpf) > + if (tx_bi->xdpf) { > i40e_clean_xdp_tx_buffer(tx_ring, tx_bi); > - else > + tx_bi->xdpf = NULL; > + } else { > xsk_frames++; > - > - tx_bi->xdpf = NULL; > - total_bytes += tx_bi->bytecount; > + } > > if (++ntc >= tx_ring->count) > ntc = 0; > } > > +skip: > tx_ring->next_to_clean += completed_frames; > if (unlikely(tx_ring->next_to_clean >= tx_ring->count)) > tx_ring->next_to_clean -= tx_ring->count; > @@ -494,8 +498,7 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, > if (xsk_frames) > xsk_umem_complete_tx(umem, xsk_frames); > > - i40e_arm_wb(tx_ring, vsi, budget); > - i40e_update_tx_stats(tx_ring, completed_frames, total_bytes); > + i40e_arm_wb(tx_ring, vsi, completed_frames); > > 
out_xmit: > if (xsk_umem_uses_need_wakeup(tx_ring->xsk_umem)) > @@ -503,7 +506,7 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, > > xmit_done = i40e_xmit_zc(tx_ring, budget); > > - return work_done && xmit_done; > + return xmit_done; > } > > /** > diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.h b/drivers/net/ethernet/intel/i40e/i40e_xsk.h > index ea919a7d..c524c14 100644 > --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.h > +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.h > @@ -15,8 +15,7 @@ int i40e_xsk_umem_setup(struct i40e_vsi *vsi, struct xdp_umem *umem, > bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 cleaned_count); > int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget); > > -bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, > - struct i40e_ring *tx_ring, int napi_budget); > +bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring); > int i40e_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags); > int i40e_alloc_rx_bi_zc(struct i40e_ring *rx_ring); > void i40e_clear_rx_bi_zc(struct i40e_ring *rx_ring); > ^ permalink raw reply [flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson
  2020-06-24  1:08   ` Samudrala, Sridhar
@ 2020-06-25 17:03   ` Bowers, AndrewX
  1 sibling, 0 replies; 8+ messages in thread
From: Bowers, AndrewX @ 2020-06-25 17:03 UTC (permalink / raw)
To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Magnus Karlsson
> Sent: Tuesday, June 23, 2020 2:44 AM
> To: Karlsson, Magnus <magnus.karlsson@intel.com>; Topel, Bjorn
> <bjorn.topel@intel.com>; intel-wired-lan at lists.osuosl.org; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Cc: maciejromanfijalkowski at gmail.com; Fijalkowski, Maciej
> <maciej.fijalkowski@intel.com>; netdev at vger.kernel.org
> Subject: [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx
> completion path
>
> Improve the performance of the AF_XDP zero-copy Tx completion path.
> When there are no XDP buffers being sent using XDP_TX or XDP_REDIRECT,
> we do not have to go through the SW ring to clean up any entries since the
> AF_XDP path does not use these. In these cases, just fast forward the
> next-to-clean counter and skip going through the SW ring. The limit on the
> maximum number of entries to complete is also removed since the algorithm
> is now O(1). To simplify the code path, the maximum number of entries to
> complete for the XDP path is therefore also increased from 256 to 512 (the
> default number of Tx HW descriptors). This should be fine since the
> completion in the XDP path is faster than in the SKB path that has 256 as the
> maximum number.
>
> This patch provides around 4% throughput improvement for the l2fwd
> application in xdpsock on my machine.
>
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c |  3 +-
>  drivers/net/ethernet/intel/i40e/i40e_txrx.h |  1 +
>  drivers/net/ethernet/intel/i40e/i40e_xsk.c  | 43 +++++++++++++++----------
> ----
>  drivers/net/ethernet/intel/i40e/i40e_xsk.h  |  3 +-
>  4 files changed, 27 insertions(+), 23 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>

^ permalink raw reply	[flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in napi_poll data path
  2020-06-23  9:44 [Intel-wired-lan] [PATCH net-next v2 0/3] i40e: improve AF_XDP performance Magnus Karlsson
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson
@ 2020-06-23  9:44 ` Magnus Karlsson
  2020-06-25 17:03   ` Bowers, AndrewX
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx ring to outside of send loop Magnus Karlsson
  2 siblings, 1 reply; 8+ messages in thread
From: Magnus Karlsson @ 2020-06-23  9:44 UTC (permalink / raw)
To: intel-wired-lan

Eliminate a division in the napi_poll data path. This division is
executed even though it is only needed in the rare case when there are
not enough interrupt lines so they have to be shared between queue
pairs. Instead, just test for this case and only execute the division
if needed. The code has been lifted from the ice driver.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 9334abd..60fa102 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2595,10 +2595,16 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
 	if (budget <= 0)
 		goto tx_only;
 
-	/* We attempt to distribute budget to each Rx queue fairly, but don't
-	 * allow the budget to go below 1 because that would exit polling early.
-	 */
-	budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
+	/* normally we have 1 Rx ring per q_vector */
+	if (unlikely(q_vector->num_ringpairs > 1))
+		/* We attempt to distribute budget to each Rx queue fairly, but
+		 * don't allow the budget to go below 1 because that would exit
+		 * polling early.
+		 */
+		budget_per_ring = max_t(int, budget / q_vector->num_ringpairs, 1);
+	else
+		/* Max of 1 Rx ring in this q_vector so give it the budget */
+		budget_per_ring = budget;
 
 	i40e_for_each_ring(ring, q_vector->rx) {
 		int cleaned = ring->xsk_umem ?
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in napi_poll data path
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in napi_poll data path Magnus Karlsson
@ 2020-06-25 17:03   ` Bowers, AndrewX
  0 siblings, 0 replies; 8+ messages in thread
From: Bowers, AndrewX @ 2020-06-25 17:03 UTC (permalink / raw)
To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Magnus Karlsson
> Sent: Tuesday, June 23, 2020 2:44 AM
> To: Karlsson, Magnus <magnus.karlsson@intel.com>; Topel, Bjorn
> <bjorn.topel@intel.com>; intel-wired-lan at lists.osuosl.org; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Cc: maciejromanfijalkowski at gmail.com; Fijalkowski, Maciej
> <maciej.fijalkowski@intel.com>; netdev at vger.kernel.org
> Subject: [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in
> napi_poll data path
>
> Eliminate a division in the napi_poll data path. This division is executed even
> though it is only needed in the rare case when there are not enough
> interrupt lines so they have to be shared between queue pairs. Instead, just
> test for this case and only execute the division if needed. The code has been
> lifted from the ice driver.
>
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>

^ permalink raw reply	[flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx ring to outside of send loop
  2020-06-23  9:44 [Intel-wired-lan] [PATCH net-next v2 0/3] i40e: improve AF_XDP performance Magnus Karlsson
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in napi_poll data path Magnus Karlsson
@ 2020-06-23  9:44 ` Magnus Karlsson
  2020-06-25 17:04   ` Bowers, AndrewX
  2 siblings, 1 reply; 8+ messages in thread
From: Magnus Karlsson @ 2020-06-23  9:44 UTC (permalink / raw)
To: intel-wired-lan

Move the check if the HW Tx ring is full to outside the send
loop. Currently it is checked for every single descriptor that we
send. Instead, tell the send loop to only process a maximum number of
packets equal to the number of available slots in the Tx ring. This
way, we can remove the check inside the send loop and gain some
performance.

Suggested-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 86635f5..081783a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -381,17 +381,10 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
 	unsigned int sent_frames = 0, total_bytes = 0;
 	struct i40e_tx_desc *tx_desc = NULL;
 	struct i40e_tx_buffer *tx_bi;
-	bool work_done = true;
 	struct xdp_desc desc;
 	dma_addr_t dma;
 
 	while (budget-- > 0) {
-		if (!unlikely(I40E_DESC_UNUSED(xdp_ring))) {
-			xdp_ring->tx_stats.tx_busy++;
-			work_done = false;
-			break;
-		}
-
 		if (!xsk_umem_consume_tx(xdp_ring->xsk_umem, &desc))
 			break;
 
@@ -427,7 +420,7 @@ static bool i40e_xmit_zc(struct i40e_ring *xdp_ring, unsigned int budget)
 		i40e_update_tx_stats(xdp_ring, sent_frames, total_bytes);
 	}
 
-	return !!budget && work_done;
+	return !!budget;
 }
 
 /**
@@ -448,19 +441,18 @@ static void i40e_clean_xdp_tx_buffer(struct i40e_ring *tx_ring,
 
 /**
  * i40e_clean_xdp_tx_irq - Completes AF_XDP entries, and cleans XDP entries
+ * @vsi: Current VSI
  * @tx_ring: XDP Tx ring
- * @tx_bi: Tx buffer info to clean
  *
 * Returns true if cleanup/tranmission is done.
 **/
 bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring)
 {
-	unsigned int ntc, budget = vsi->work_limit;
 	struct xdp_umem *umem = tx_ring->xsk_umem;
 	u32 i, completed_frames, xsk_frames = 0;
 	u32 head_idx = i40e_get_head(tx_ring);
 	struct i40e_tx_buffer *tx_bi;
-	bool xmit_done;
+	unsigned int ntc;
 
 	if (head_idx < tx_ring->next_to_clean)
 		head_idx += tx_ring->count;
@@ -504,9 +496,7 @@ bool i40e_clean_xdp_tx_irq(struct i40e_vsi *vsi, struct i40e_ring *tx_ring)
 	if (xsk_umem_uses_need_wakeup(tx_ring->xsk_umem))
 		xsk_set_tx_need_wakeup(tx_ring->xsk_umem);
 
-	xmit_done = i40e_xmit_zc(tx_ring, budget);
-
-	return xmit_done;
+	return i40e_xmit_zc(tx_ring, I40E_DESC_UNUSED(tx_ring));
 }
 
 /**
@@ -570,7 +560,7 @@ void i40e_xsk_clean_rx_ring(struct i40e_ring *rx_ring)
 
 /**
  * i40e_xsk_clean_xdp_ring - Clean the XDP Tx ring on shutdown
- * @xdp_ring: XDP Tx ring
+ * @tx_ring: XDP Tx ring
 **/
 void i40e_xsk_clean_tx_ring(struct i40e_ring *tx_ring)
 {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 8+ messages in thread
* [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx ring to outside of send loop
  2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx ring to outside of send loop Magnus Karlsson
@ 2020-06-25 17:04   ` Bowers, AndrewX
  0 siblings, 0 replies; 8+ messages in thread
From: Bowers, AndrewX @ 2020-06-25 17:04 UTC (permalink / raw)
To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Magnus Karlsson
> Sent: Tuesday, June 23, 2020 2:44 AM
> To: Karlsson, Magnus <magnus.karlsson@intel.com>; Topel, Bjorn
> <bjorn.topel@intel.com>; intel-wired-lan at lists.osuosl.org; Kirsher, Jeffrey T
> <jeffrey.t.kirsher@intel.com>; Samudrala, Sridhar
> <sridhar.samudrala@intel.com>
> Cc: maciejromanfijalkowski at gmail.com; Fijalkowski, Maciej
> <maciej.fijalkowski@intel.com>; netdev at vger.kernel.org
> Subject: [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx
> ring to outside of send loop
>
> Move the check if the Hw Tx ring is full to outside the send loop. Currently it
> is checked for every single descriptor that we send. Instead, tell the send
> loop to only process a maximum number of packets equal to the number of
> available slots in the Tx ring. This way, we can remove the check inside the
> send loop to and gain some performance.
>
> Suggested-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
> Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_xsk.c | 20 +++++--------------
>  1 file changed, 5 insertions(+), 15 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>

^ permalink raw reply	[flat|nested] 8+ messages in thread
end of thread, other threads:[~2020-06-25 17:04 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2020-06-23  9:44 [Intel-wired-lan] [PATCH net-next v2 0/3] i40e: improve AF_XDP performance Magnus Karlsson
2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 1/3] i40e: optimize AF_XDP Tx completion path Magnus Karlsson
2020-06-24  1:08   ` Samudrala, Sridhar
2020-06-25 17:03   ` Bowers, AndrewX
2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 2/3] i40e: eliminate division in napi_poll data path Magnus Karlsson
2020-06-25 17:03   ` Bowers, AndrewX
2020-06-23  9:44 ` [Intel-wired-lan] [PATCH net-next v2 3/3] i40e: move check of full Tx ring to outside of send loop Magnus Karlsson
2020-06-25 17:04   ` Bowers, AndrewX