* [PATCH intel-net 0/2] ice: xsk: ZC changes
From: Maciej Fijalkowski @ 2022-08-30 12:38 UTC
To: intel-wired-lan
Cc: netdev, bpf, anthony.l.nguyen, magnus.karlsson,
Maciej Fijalkowski
Hi,
this set consists of two fixes to issues that were either pointed out
indirectly on the mailing list (John was reviewing AF_XDP selftests that
were testing ice's ZC support) or reported directly by customers.
The first patch lets user space see a done descriptor in the CQ even
after only a single frame has been transmitted, and the second patch
removes the requirement that HW rings be sized to a power of 2 number
of descriptors when used with AF_XDP.
Thanks!
Maciej
Maciej Fijalkowski (2):
ice: xsk: change batched Tx descriptor cleaning
ice: xsk: drop power of 2 ring size restriction for AF_XDP
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drivers/net/ethernet/intel/ice/ice_xsk.c | 165 +++++++++-------------
drivers/net/ethernet/intel/ice/ice_xsk.h | 7 +-
3 files changed, 72 insertions(+), 102 deletions(-)
--
2.34.1
* [PATCH intel-net 1/2] ice: xsk: change batched Tx descriptor cleaning
From: Maciej Fijalkowski @ 2022-08-30 12:38 UTC
To: intel-wired-lan
Cc: netdev, bpf, anthony.l.nguyen, magnus.karlsson,
Maciej Fijalkowski
AF_XDP Tx descriptor cleaning in the ice driver currently works in a
"lazy" way - descriptors are not cleaned immediately after send.
Instead, we hold off on cleaning until free space in the ring drops
below a particular threshold. This was supposed to reduce the amount of
unnecessary cleaning work and keep the ring saturated rather than
empty.
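As a minimal sketch (an illustration, not part of the patch), the old
gate in ice_xmit_zc() looked roughly like this, with names as in the
pre-patch driver code:

	u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);

	/* reclaim only once free space dips below a quarter of the ring */
	if (budget < tx_thresh)
		budget += ice_clean_xdp_irq_zc(xdp_ring, napi_budget);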
In the AF_XDP realm, cleaning Tx descriptors implies producing them to
the CQ. This is the way of letting user space know that a particular
descriptor has been sent, as John points out in [0].
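Concretely, "producing to CQ" boils down to one buffer pool API call; a
sketch assuming nframes (a placeholder name) descriptors were confirmed
as sent:

	/* credit nframes back to the completion queue so that user
	 * space can reuse the underlying umem chunks
	 */
	xsk_tx_completed(xdp_ring->xsk_pool, nframes);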
We tried to implement serial descriptor cleaning to be used in
conjunction with batched cleaning, but it made the code base more
convoluted and probably harder to maintain in the future. Therefore we
step away from batched cleaning in its current form in favor of an
approach where we set the RS bit on the last descriptor of every batch
and always clean at the beginning of ice_xmit_zc().
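Condensed from the diff below, the new ice_xmit_zc() flow is:

	ice_clean_xdp_irq_zc(xdp_ring);		/* 1) complete sent frames to CQ */

	budget = ICE_DESC_UNUSED(xdp_ring);	/* 2) bound work by free ring space, */
	budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring)); /* capped at a quarter */

	nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
	/* ... produce nb_pkts descriptors ... */
	ice_set_rs_bit(xdp_ring);		/* 3) RS bit on last produced descriptor */
	ice_xdp_ring_update_tail(xdp_ring);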
This means we give up a bit of Tx performance, but it doesn't hurt the
l2fwd scenario, because the Tx side is much faster than Rx and Rx is
the one that has to catch up with Tx. txonly can be treated as an
AF_XDP based packet generator.
FWIW Tx descriptors are still produced in a batched way.
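For reference, the completed frame count between next_to_clean (ntc)
and the last RS-marked slot handles ring wraparound as below (the same
arithmetic as in ice_clean_xdp_irq_zc() in the diff; the numbers in the
comment are only an example):

	/* last RS-marked slot sits one behind next_to_use */
	last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;

	if (last_rs >= ntc)	/* no wrap */
		xsk_frames = last_rs - ntc + 1;
	else			/* wrapped: cnt = 512, ntc = 510, last_rs = 5 -> 8 */
		xsk_frames = last_rs + cnt - ntc + 1;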
Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
[0]: https://lore.kernel.org/bpf/62b0a20232920_3573208ab@john.notmuch/
Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drivers/net/ethernet/intel/ice/ice_xsk.c | 143 +++++++++-------------
drivers/net/ethernet/intel/ice/ice_xsk.h | 7 +-
3 files changed, 64 insertions(+), 88 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 836dce840712..b97d34d0741f 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1464,7 +1464,7 @@ int ice_napi_poll(struct napi_struct *napi, int budget)
bool wd;
if (tx_ring->xsk_pool)
- wd = ice_xmit_zc(tx_ring, ICE_DESC_UNUSED(tx_ring), budget);
+ wd = ice_xmit_zc(tx_ring);
else if (ice_ring_is_xdp(tx_ring))
wd = true;
else
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 49ba8bfdbf04..46efe72d1342 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -719,69 +719,57 @@ ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf)
}
/**
- * ice_clean_xdp_irq_zc - Reclaim resources after transmit completes on XDP ring
- * @xdp_ring: XDP ring to clean
- * @napi_budget: amount of descriptors that NAPI allows us to clean
- *
- * Returns count of cleaned descriptors
+ * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ
+ * @xdp_ring: XDP Tx ring
*/
-static u16 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring, int napi_budget)
+static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring)
{
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
- int budget = napi_budget / tx_thresh;
- u16 next_dd = xdp_ring->next_dd;
- u16 ntc, cleared_dds = 0;
-
- do {
- struct ice_tx_desc *next_dd_desc;
- u16 desc_cnt = xdp_ring->count;
- struct ice_tx_buf *tx_buf;
- u32 xsk_frames;
- u16 i;
-
- next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd);
- if (!(next_dd_desc->cmd_type_offset_bsz &
- cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)))
- break;
+ u16 ntc = xdp_ring->next_to_clean;
+ struct ice_tx_desc *tx_desc;
+ u16 cnt = xdp_ring->count;
+ struct ice_tx_buf *tx_buf;
+ u16 xsk_frames = 0;
+ u16 last_rs;
+ int i;
- cleared_dds++;
- xsk_frames = 0;
- if (likely(!xdp_ring->xdp_tx_active)) {
- xsk_frames = tx_thresh;
- goto skip;
- }
+ last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1;
+ tx_desc = ICE_TX_DESC(xdp_ring, last_rs);
+ if ((tx_desc->cmd_type_offset_bsz &
+ cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
+ if (last_rs >= ntc)
+ xsk_frames = last_rs - ntc + 1;
+ else
+ xsk_frames = last_rs + cnt - ntc + 1;
+ }
- ntc = xdp_ring->next_to_clean;
+ if (!xsk_frames)
+ return;
- for (i = 0; i < tx_thresh; i++) {
- tx_buf = &xdp_ring->tx_buf[ntc];
+ if (likely(!xdp_ring->xdp_tx_active))
+ goto skip;
- if (tx_buf->raw_buf) {
- ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
- tx_buf->raw_buf = NULL;
- } else {
- xsk_frames++;
- }
+ ntc = xdp_ring->next_to_clean;
+ for (i = 0; i < xsk_frames; i++) {
+ tx_buf = &xdp_ring->tx_buf[ntc];
- ntc++;
- if (ntc >= xdp_ring->count)
- ntc = 0;
+ if (tx_buf->raw_buf) {
+ ice_clean_xdp_tx_buf(xdp_ring, tx_buf);
+ tx_buf->raw_buf = NULL;
+ } else {
+ xsk_frames++;
}
+
+ ntc++;
+ if (ntc >= xdp_ring->count)
+ ntc = 0;
+ }
skip:
- xdp_ring->next_to_clean += tx_thresh;
- if (xdp_ring->next_to_clean >= desc_cnt)
- xdp_ring->next_to_clean -= desc_cnt;
- if (xsk_frames)
- xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
- next_dd_desc->cmd_type_offset_bsz = 0;
- next_dd = next_dd + tx_thresh;
- if (next_dd >= desc_cnt)
- next_dd = tx_thresh - 1;
- } while (--budget);
-
- xdp_ring->next_dd = next_dd;
-
- return cleared_dds * tx_thresh;
+ tx_desc->cmd_type_offset_bsz = 0;
+ xdp_ring->next_to_clean += xsk_frames;
+ if (xdp_ring->next_to_clean >= cnt)
+ xdp_ring->next_to_clean -= cnt;
+ if (xsk_frames)
+ xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames);
}
/**
@@ -816,7 +804,6 @@ static void ice_xmit_pkt(struct ice_tx_ring *xdp_ring, struct xdp_desc *desc,
static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
unsigned int *total_bytes)
{
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
u16 ntu = xdp_ring->next_to_use;
struct ice_tx_desc *tx_desc;
u32 i;
@@ -836,13 +823,6 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
}
xdp_ring->next_to_use = ntu;
-
- if (xdp_ring->next_to_use > xdp_ring->next_rs) {
- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
- xdp_ring->next_rs += tx_thresh;
- }
}
/**
@@ -855,7 +835,6 @@ static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
u32 nb_pkts, unsigned int *total_bytes)
{
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
u32 batched, leftover, i;
batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH);
@@ -864,54 +843,54 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs,
ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes);
for (; i < batched + leftover; i++)
ice_xmit_pkt(xdp_ring, &descs[i], total_bytes);
+}
- if (xdp_ring->next_to_use > xdp_ring->next_rs) {
- struct ice_tx_desc *tx_desc;
+/**
+ * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU)
+ * @xdp_ring: XDP ring to produce the HW Tx descriptors on
+ */
+static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring)
+{
+ u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1;
+ struct ice_tx_desc *tx_desc;
- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
- xdp_ring->next_rs += tx_thresh;
- }
+ tx_desc = ICE_TX_DESC(xdp_ring, ntu);
+ tx_desc->cmd_type_offset_bsz |=
+ cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
}
/**
* ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring
* @xdp_ring: XDP ring to produce the HW Tx descriptors on
- * @budget: number of free descriptors on HW Tx ring that can be used
- * @napi_budget: amount of descriptors that NAPI allows us to clean
*
* Returns true if there is no more work that needs to be done, false otherwise
*/
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget)
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring)
{
struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs;
- u16 tx_thresh = ICE_RING_QUARTER(xdp_ring);
u32 nb_pkts, nb_processed = 0;
unsigned int total_bytes = 0;
+ int budget;
+
+ ice_clean_xdp_irq_zc(xdp_ring);
- if (budget < tx_thresh)
- budget += ice_clean_xdp_irq_zc(xdp_ring, napi_budget);
+ budget = ICE_DESC_UNUSED(xdp_ring);
+ budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring));
nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget);
if (!nb_pkts)
return true;
if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) {
- struct ice_tx_desc *tx_desc;
-
nb_processed = xdp_ring->count - xdp_ring->next_to_use;
ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes);
- tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
- tx_desc->cmd_type_offset_bsz |=
- cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
- xdp_ring->next_rs = tx_thresh - 1;
xdp_ring->next_to_use = 0;
}
ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed,
&total_bytes);
+ ice_set_rs_bit(xdp_ring);
ice_xdp_ring_update_tail(xdp_ring);
ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes);
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.h b/drivers/net/ethernet/intel/ice/ice_xsk.h
index 21faec8e97db..35dd3c57c4df 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.h
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.h
@@ -26,12 +26,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count);
bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi);
void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring);
void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring);
-bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget);
+bool ice_xmit_zc(struct ice_tx_ring *xdp_ring);
#else
-static inline bool
-ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring,
- u32 __always_unused budget,
- int __always_unused napi_budget)
+static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring)
{
return false;
}
--
2.34.1
* [PATCH intel-net 2/2] ice: xsk: drop power of 2 ring size restriction for AF_XDP
From: Maciej Fijalkowski @ 2022-08-30 12:38 UTC
To: intel-wired-lan
Cc: netdev, bpf, anthony.l.nguyen, magnus.karlsson,
Maciej Fijalkowski
Multiple customers in the past months have reported that commit
296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
prevents them from using a ring size of 8160 in conjunction with AF_XDP.
Remove this restriction.
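The mask-based leftover computation was the piece that silently assumed
a power-of-2 quarter threshold; a standalone userspace illustration
(not driver code) of why it breaks for a ring size like 8160:

	#include <stdio.h>

	int main(void)
	{
		unsigned int rx_thresh = 2040;	/* quarter of an 8160-entry ring */
		unsigned int count = 5000;

		/* equals count % rx_thresh only when rx_thresh is a power of 2 */
		printf("mask:   %u\n", count & (rx_thresh - 1));	/* 896, wrong */
		/* the form the patch switches to works for any ring size */
		printf("divide: %u\n", count - (count / rx_thresh) * rx_thresh); /* 920 */
		return 0;
	}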
Fixes: 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_xsk.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 46efe72d1342..d51e45ea4499 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -329,13 +329,6 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
bool if_running, pool_present = !!pool;
int ret = 0, pool_failure = 0;
- if (!is_power_of_2(vsi->rx_rings[qid]->count) ||
- !is_power_of_2(vsi->tx_rings[qid]->count)) {
- netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n");
- pool_failure = -EINVAL;
- goto failure;
- }
-
if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi);
if (if_running) {
@@ -358,7 +351,6 @@ int ice_xsk_pool_setup(struct ice_vsi *vsi, struct xsk_buff_pool *pool, u16 qid)
netdev_err(vsi->netdev, "ice_qp_ena error = %d\n", ret);
}
-failure:
if (pool_failure) {
netdev_err(vsi->netdev, "Could not %sable buffer pool, error = %d\n",
pool_present ? "en" : "dis", pool_failure);
@@ -465,11 +457,10 @@ static bool __ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
{
u16 rx_thresh = ICE_RING_QUARTER(rx_ring);
- u16 batched, leftover, i, tail_bumps;
+ u16 leftover, i, tail_bumps;
- batched = ALIGN_DOWN(count, rx_thresh);
- tail_bumps = batched / rx_thresh;
- leftover = count & (rx_thresh - 1);
+ tail_bumps = count / rx_thresh;
+ leftover = count - (tail_bumps * rx_thresh);
for (i = 0; i < tail_bumps; i++)
if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh))
@@ -968,14 +959,17 @@ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi)
*/
void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
{
- u16 count_mask = rx_ring->count - 1;
u16 ntc = rx_ring->next_to_clean;
u16 ntu = rx_ring->next_to_use;
- for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
+ while (ntc != ntu) {
struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);
xsk_buff_free(xdp);
+
+ ntc++;
+ if (ntc >= rx_ring->count)
+ ntc = 0;
}
}
--
2.34.1
* Re: [PATCH intel-net 1/2] ice: xsk: change batched Tx descriptor cleaning
From: Maciej Fijalkowski @ 2022-08-30 12:43 UTC
To: intel-wired-lan; +Cc: netdev, bpf, anthony.l.nguyen, magnus.karlsson
On Tue, Aug 30, 2022 at 02:38:02PM +0200, Maciej Fijalkowski wrote:
> AF_XDP Tx descriptor cleaning in the ice driver currently works in a
> "lazy" way - descriptors are not cleaned immediately after send.
> Instead, we hold off on cleaning until free space in the ring drops
> below a particular threshold. This was supposed to reduce the amount
> of unnecessary cleaning work and keep the ring saturated rather than
> empty.
>
> In the AF_XDP realm, cleaning Tx descriptors implies producing them to
> the CQ. This is the way of letting user space know that a particular
> descriptor has been sent, as John points out in [0].
>
> We tried to implement serial descriptor cleaning to be used in
> conjunction with batched cleaning, but it made the code base more
> convoluted and probably harder to maintain in the future. Therefore we
> step away from batched cleaning in its current form in favor of an
> approach where we set the RS bit on the last descriptor of every batch
> and always clean at the beginning of ice_xmit_zc().
>
> This means we give up a bit of Tx performance, but it doesn't hurt the
> l2fwd scenario, because the Tx side is much faster than Rx and Rx is
> the one that has to catch up with Tx. txonly can be treated as an
> AF_XDP based packet generator.
>
> FWIW Tx descriptors are still produced in a batched way.
>
> Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
> [0]: https://lore.kernel.org/bpf/62b0a20232920_3573208ab@john.notmuch/
>
> Fixes: 126cdfe1007a ("ice: xsk: Improve AF_XDP ZC Tx and use batching API")
Ugh, too many Fixes tags. I'll send a v2.
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
> drivers/net/ethernet/intel/ice/ice_xsk.c | 143 +++++++++-------------
> drivers/net/ethernet/intel/ice/ice_xsk.h | 7 +-
> 3 files changed, 64 insertions(+), 88 deletions(-)
>
* RE: [Intel-wired-lan] [PATCH intel-net 2/2] ice: xsk: drop power of 2 ring size restriction for AF_XDP
From: Kuruvinakunnel, George @ 2022-09-27 2:53 UTC
To: Fijalkowski, Maciej, intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, Karlsson, Magnus
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of Maciej
> Fijalkowski
> Sent: Tuesday, August 30, 2022 6:08 PM
> To: intel-wired-lan@lists.osuosl.org
> Cc: netdev@vger.kernel.org; bpf@vger.kernel.org; Karlsson, Magnus
> <magnus.karlsson@intel.com>
> Subject: [Intel-wired-lan] [PATCH intel-net 2/2] ice: xsk: drop power of 2 ring size
> restriction for AF_XDP
>
> Multiple customers in the past months have reported that commit
> 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
> prevents them from using a ring size of 8160 in conjunction with AF_XDP.
> Remove this restriction.
>
> Fixes: 296f13ff3854 ("ice: xsk: Force rings to be sized to power of 2")
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
> drivers/net/ethernet/intel/ice/ice_xsk.c | 22 ++++++++--------------
> 1 file changed, 8 insertions(+), 14 deletions(-)
>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>