* [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems
@ 2024-09-03 18:05 Maciej Fijalkowski
2024-09-05 13:56 ` Maciej Fijalkowski
0 siblings, 1 reply; 2+ messages in thread
From: Maciej Fijalkowski @ 2024-09-03 18:05 UTC (permalink / raw)
To: intel-wired-lan
Cc: netdev, anthony.l.nguyen, magnus.karlsson, bjorn,
Maciej Fijalkowski, Dries De Winter
When synchronizing DMA operations is necessary,
xsk_buff_alloc_batch() returns a single buffer instead of the requested
count. Detect this situation when filling the HW Rx ring in the ZC
driver and fall back to calling xsk_buff_alloc() in a loop so that the
ring still gets all of its buffers.
Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 240a7bec242b..889d0a5070d7 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -449,7 +449,24 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
u16 buffs;
int i;
+ if (unlikely(!xsk_buff_can_alloc(pool, count)))
+ return 0;
+
buffs = xsk_buff_alloc_batch(pool, xdp, count);
+ /* fill the remainder that the batch API did not provide for us;
+ * this is usually the case on non-coherent systems that require
+ * DMA syncs
+ */
+ for (; buffs < count; buffs++) {
+ struct xdp_buff *tmp;
+
+ tmp = xsk_buff_alloc(pool);
+ if (unlikely(!tmp))
+ goto free;
+
+ xdp[buffs] = tmp;
+ }
+
for (i = 0; i < buffs; i++) {
dma = xsk_buff_xdp_get_dma(*xdp);
rx_desc->read.pkt_addr = cpu_to_le64(dma);
@@ -465,6 +482,13 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
}
return buffs;
+
+free:
+ for (i = 0; i < buffs; i++) {
+ xsk_buff_free(*xdp);
+ xdp++;
+ }
+ return 0;
}
/**
--
2.34.1
* Re: [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems
2024-09-03 18:05 [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems Maciej Fijalkowski
@ 2024-09-05 13:56 ` Maciej Fijalkowski
0 siblings, 0 replies; 2+ messages in thread
From: Maciej Fijalkowski @ 2024-09-05 13:56 UTC (permalink / raw)
To: intel-wired-lan
Cc: netdev, anthony.l.nguyen, magnus.karlsson, bjorn, Dries De Winter
On Tue, Sep 03, 2024 at 08:05:11PM +0200, Maciej Fijalkowski wrote:
> When synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. Detect this situation when filling the HW Rx ring in the ZC
> driver and fall back to calling xsk_buff_alloc() in a loop so that the
> ring still gets all of its buffers.
Instead of addressing this in every driver, let us handle it in the core
by looping over xp_alloc().
Please drop this patch; I will follow up with a fix to the core instead.
Dries also found an issue where xp_alloc_batch(), when called with
max == 0, still returns a single buffer in the dma_need_sync case; we
will fix that as well.
>
> Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
> Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
> drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> index 240a7bec242b..889d0a5070d7 100644
> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> @@ -449,7 +449,24 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> u16 buffs;
> int i;
>
> + if (unlikely(!xsk_buff_can_alloc(pool, count)))
> + return 0;
> +
> buffs = xsk_buff_alloc_batch(pool, xdp, count);
> + /* fill the remainder that the batch API did not provide for us;
> + * this is usually the case on non-coherent systems that require
> + * DMA syncs
> + */
> + for (; buffs < count; buffs++) {
> + struct xdp_buff *tmp;
> +
> + tmp = xsk_buff_alloc(pool);
> + if (unlikely(!tmp))
> + goto free;
> +
> + xdp[buffs] = tmp;
> + }
> +
> for (i = 0; i < buffs; i++) {
> dma = xsk_buff_xdp_get_dma(*xdp);
> rx_desc->read.pkt_addr = cpu_to_le64(dma);
> @@ -465,6 +482,13 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> }
>
> return buffs;
> +
> +free:
> + for (i = 0; i < buffs; i++) {
> + xsk_buff_free(*xdp);
> + xdp++;
> + }
> + return 0;
> }
>
> /**
> --
> 2.34.1
>
>