Intel-Wired-Lan Archive on lore.kernel.org
From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
To: intel-wired-lan@osuosl.org
Subject: [Intel-wired-lan] [PATCH intel-net 3/5] ice: xsk: do not clear status_error0 for ntu + nb_buffs descriptor
Date: Mon, 13 Dec 2021 12:33:02 +0100	[thread overview]
Message-ID: <YbcvbjeMFLTMhYaO@boxer> (raw)
In-Reply-To: <20211213112634.37312-1-alexandr.lobakin@intel.com>

On Mon, Dec 13, 2021 at 12:26:34PM +0100, Alexander Lobakin wrote:
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Date: Mon, 13 Dec 2021 12:12:13 +0100
> 
> > On Fri, Dec 10, 2021 at 11:37:46PM +0100, Alexander Lobakin wrote:
> > > From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > Date: Fri, 10 Dec 2021 15:59:39 +0100
> > > 
> > > > The descriptor that ntu is pointing at when we exit
> > > > ice_alloc_rx_bufs_zc() should not have its corresponding DD bit
> > > > cleared, as that descriptor has not been allocated there and is not
> > > > valid for HW usage.
> > > > 
> > > > The allocation routine at entry will fill the descriptor that ntu
> > > > points to, as ntu was set to ntu + nb_buffs on the previous call.
> > > > 
> > > > Even the spec says:
> > > > "The tail pointer should be set to one descriptor beyond the last empty
> > > > descriptor in host descriptor ring."
> > > > 
> > > > Therefore, refrain from clearing status_error0 on the ntu + nb_buffs
> > > > descriptor.
> > > > 
> > > > Fixes: db804cfc21e9 ("ice: Use the xsk batched rx allocation interface")
> > > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > > ---
> > > >  drivers/net/ethernet/intel/ice/ice_xsk.c | 7 +------
> > > >  1 file changed, 1 insertion(+), 6 deletions(-)
> > > > 
> > > > diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > index 5cb61955c1f3..874fce9fa1c3 100644
> > > > --- a/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
> > > > @@ -394,14 +394,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
> > > >  	}
> > > >  
> > > >  	ntu += nb_buffs;
> > > > -	if (ntu == rx_ring->count) {
> > > > -		rx_desc = ICE_RX_DESC(rx_ring, 0);
> > > > -		xdp = rx_ring->xdp_buf;
> > > > +	if (ntu == rx_ring->count)
> > > 
> > > Maybe use unlikely() here while at it? 1/512 (depending on ring
> > > size) chance is low enough.
> > 
> > This would make sense to me if we had this check inside some loop going
> > over the buffers that we received from the xsk pool.
> > 
> > I tried such an approach on the Tx side, IIRC, and Magnus said that
> > unlikely() would move this code to the cold section at the end of the
> > text section.
> > 
> > > 
> > > >  		ntu = 0;
> > > > -	}
> > > >  
> > > > -	/* clear the status bits for the next_to_use descriptor */
> > > > -	rx_desc->wb.status_error0 = 0;
> > > >  	ice_release_rx_desc(rx_ring, ntu);
> > > 
> > > This interferes with my patch in next-queue ([0]) (well, supersedes
> > > it to be precise).
> > > Tony, what would be better to do with it, just drop mine or correct
> > > this one (it would become an oneliner removing status_error0
> > > assignment then)?
> > 
> > Oops, sorry. This set should go to net though, not net-next, but I can
> > base it on top of your patch.
> 
> Ah, it's for -net, my bad. Let's leave it as it is then; my series
> for -next has patches for both i40e and ice, so the latter will just
> get merged when the netdev maintainers pull net into net-next.

But we remove the same line in these patches. Your patch has a Fixes tag,
so what was the reason for directing it to -next? Maybe I could include
your patch in my set?

> 
> > 
> > > 
> > > >  
> > > >  	return count == nb_buffs;
> > > > -- 
> > > > 2.33.1
> > > 
> > > [0] https://lore.kernel.org/netdev/20211130183649.1166842-2-alexandr.lobakin@intel.com
> > > 
> > > Al
> 
> Al


Thread overview: 15+ messages
2021-12-10 14:59 [Intel-wired-lan] [PATCH intel-net 0/5] ice: xsk: Rx processing fixes Maciej Fijalkowski
2021-12-10 14:59 ` [Intel-wired-lan] [PATCH intel-net 1/5] ice: xsk: return xsk buffers back to pool when cleaning the ring Maciej Fijalkowski
2021-12-10 14:59 ` [Intel-wired-lan] [PATCH intel-net 2/5] ice: xsk: allocate separate memory for XDP SW ring Maciej Fijalkowski
2021-12-10 21:05   ` Nguyen, Anthony L
2021-12-10 22:24   ` Alexander Lobakin
2021-12-13 11:14     ` Maciej Fijalkowski
2021-12-10 14:59 ` [Intel-wired-lan] [PATCH intel-net 3/5] ice: xsk: do not clear status_error0 for ntu + nb_buffs descriptor Maciej Fijalkowski
2021-12-10 22:37   ` Alexander Lobakin
2021-12-13 11:12     ` Maciej Fijalkowski
2021-12-13 11:26       ` Alexander Lobakin
2021-12-13 11:33         ` Maciej Fijalkowski [this message]
2021-12-13 11:43           ` Alexander Lobakin
2021-12-13 12:53       ` Magnus Karlsson
2021-12-10 14:59 ` [Intel-wired-lan] [PATCH intel-net 4/5] ice: xsk: allow empty Rx descriptors on XSK ZC data path Maciej Fijalkowski
2021-12-10 14:59 ` [Intel-wired-lan] [PATCH intel-net 5/5] ice: xsk: fix cleaned_count setting Maciej Fijalkowski
