netdev.vger.kernel.org archive mirror
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: <intel-wired-lan@lists.osuosl.org>
Cc: <netdev@vger.kernel.org>, <anthony.l.nguyen@intel.com>,
	<magnus.karlsson@intel.com>, <bjorn@kernel.org>,
	Dries De Winter <ddewinter@synamedia.com>
Subject: Re: [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems
Date: Thu, 5 Sep 2024 15:56:43 +0200	[thread overview]
Message-ID: <Ztm4m9gnWRBhWGqE@boxer> (raw)
In-Reply-To: <20240903180511.244041-1-maciej.fijalkowski@intel.com>

On Tue, Sep 03, 2024 at 08:05:11PM +0200, Maciej Fijalkowski wrote:
> In cases when synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. Detect such a situation when filling the HW Rx ring in the ZC
> driver and call xsk_buff_alloc() in a loop so that the ring gets the
> buffers it needs.

Instead of addressing this in every driver, let us handle it in the core
by looping over xp_alloc().

Please drop this patch; I will follow up with a fix to the core instead.

Dries also found that if xp_alloc_batch() is called with max == 0, it
still returns a single buffer on the dma_need_sync path; we will fix
that as well.

> 
> Reported-and-tested-by: Dries De Winter <ddewinter@synamedia.com>
> Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_xsk.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> index 240a7bec242b..889d0a5070d7 100644
> --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> @@ -449,7 +449,24 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
>  	u16 buffs;
>  	int i;
>  
> +	if (unlikely(!xsk_buff_can_alloc(pool, count)))
> +		return 0;
> +
>  	buffs = xsk_buff_alloc_batch(pool, xdp, count);
> +	/* fill the remainder part that batch API did not provide for us,
> +	 * this is usually the case for non-coherent systems that require DMA
> +	 * syncs
> +	 */
> +	for (; buffs < count; buffs++) {
> +		struct xdp_buff *tmp;
> +
> +		tmp = xsk_buff_alloc(pool);
> +		if (unlikely(!tmp))
> +			goto free;
> +
> +		xdp[buffs] = tmp;
> +	}
> +
>  	for (i = 0; i < buffs; i++) {
>  		dma = xsk_buff_xdp_get_dma(*xdp);
>  		rx_desc->read.pkt_addr = cpu_to_le64(dma);
> @@ -465,6 +482,13 @@ static u16 ice_fill_rx_descs(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
>  	}
>  
>  	return buffs;
> +
> +free:
> +	for (i = 0; i < buffs; i++) {
> +		xsk_buff_free(*xdp);
> +		xdp++;
> +	}
> +	return 0;
>  }
>  
>  /**
> -- 
> 2.34.1
> 
> 

Thread overview: 2+ messages
2024-09-03 18:05 [PATCH iwl-net] ice: xsk: fix Rx allocation on non-coherent systems Maciej Fijalkowski
2024-09-05 13:56 ` Maciej Fijalkowski [this message]
