From: Simon Horman <horms@kernel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
jacob.e.keller@intel.com, xudu@redhat.com, mschmidt@redhat.com,
jmaxwell@redhat.com, poros@redhat.com,
przemyslaw.kitszel@intel.com
Subject: Re: [PATCH v4 iwl-net 1/3] ice: put Rx buffers after being done with current frame
Date: Thu, 23 Jan 2025 10:43:03 +0000 [thread overview]
Message-ID: <20250123104303.GJ395043@kernel.org> (raw)
In-Reply-To: <20250122151046.574061-2-maciej.fijalkowski@intel.com>
On Wed, Jan 22, 2025 at 04:10:44PM +0100, Maciej Fijalkowski wrote:
> Introduce a new helper, ice_put_rx_mbuf(), that goes through the frags
> gathered for the current frame and calls ice_put_rx_buf() on each of
> them. The previous logic, which was meant to simplify and optimize the
> driver by going through a batch of all buffers processed in the current
> NAPI instance, turned out to be broken for jumbo frames under the very
> heavy load generated by multi-threaded iperf and an nginx/wrk pair
> between server and client. The delay introduced by the approach being
> dropped is simply too big; the decision about recycling or releasing a
> page has to be made as quickly as possible.
>
> While at it, address the error path of ice_add_xdp_frag() - putting the
> buffer has been missing there since day 1.
>
> As a nice side effect, we get rid of the annoying and repetitive
> three-liner:
>
> xdp->data = NULL;
> rx_ring->first_desc = ntc;
> rx_ring->nr_frags = 0;
>
> by embedding it within the introduced routine.
>
> Fixes: 1dc1a7e7f410 ("ice: Centrallize Rx buffer recycling")
> Reported-and-tested-by: Xu Du <xudu@redhat.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
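
To make the shape of the change concrete, below is a minimal, compilable C
sketch of a helper along the lines of the ice_put_rx_mbuf()/ice_put_rx_buf()
pair described in the quoted message. Every name carries a sketch_ prefix
because the types, fields and signatures are simplified stand-ins, not the
actual ice driver definitions or the code of this patch.

#include <stddef.h>

#define SKETCH_RING_SIZE 64

struct sketch_rx_buf {
        void *page;                     /* placeholder for the backing page */
};

struct sketch_rx_ring {
        struct sketch_rx_buf bufs[SKETCH_RING_SIZE];
        unsigned int count;             /* ring size */
        unsigned int first_desc;        /* head buffer of the current frame */
        unsigned int nr_frags;          /* frags gathered on top of the head */
};

struct sketch_xdp_buff {
        void *data;
};

/* Per-buffer recycle-or-release decision (details omitted in the sketch). */
static void sketch_put_rx_buf(struct sketch_rx_buf *buf)
{
        buf->page = NULL;
}

/*
 * Release every buffer belonging to the frame we are done with and reset
 * the per-frame bookkeeping -- the repetitive three-liner quoted above
 * ends up living here.
 */
static void sketch_put_rx_mbuf(struct sketch_rx_ring *ring,
                               struct sketch_xdp_buff *xdp,
                               unsigned int ntc)
{
        unsigned int idx = ring->first_desc;
        unsigned int i;

        /* the head buffer plus nr_frags fragments */
        for (i = 0; i <= ring->nr_frags; i++) {
                sketch_put_rx_buf(&ring->bufs[idx]);
                if (++idx == ring->count)
                        idx = 0;
        }

        xdp->data = NULL;
        ring->first_desc = ntc;
        ring->nr_frags = 0;
}

Releasing the buffers as soon as the current frame is done with, rather than
deferring the decision to the end of the NAPI poll, is what keeps page
recycling/releasing timely under the heavy jumbo-frame load described above.
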
Reviewed-by: Simon Horman <horms@kernel.org>
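
The error-path note above (ice_add_xdp_frag() never putting the gathered
buffers on failure) can be pictured with the same stand-in types. This is
only a guess at the general shape of such a call site, not the driver's
actual Rx loop:

/* Stand-in that pretends attaching another frag fails (frag table full). */
static int sketch_add_xdp_frag(struct sketch_xdp_buff *xdp,
                               struct sketch_rx_buf *buf)
{
        (void)xdp;
        (void)buf;
        return -1;
}

/*
 * Illustrative call site: @cur is the descriptor holding the buffer being
 * attached, i.e. the one right after the frags gathered so far.
 */
static void sketch_handle_frag(struct sketch_rx_ring *ring,
                               struct sketch_xdp_buff *xdp,
                               unsigned int cur)
{
        struct sketch_rx_buf *buf = &ring->bufs[cur];
        unsigned int next = (cur + 1 == ring->count) ? 0 : cur + 1;

        if (sketch_add_xdp_frag(xdp, buf)) {
                /*
                 * Error path: the buffer that could not be attached still
                 * belongs to the frame, so account for it and release the
                 * whole frame right away instead of leaking it.
                 */
                ring->nr_frags++;
                sketch_put_rx_mbuf(ring, xdp, next);
                return;
        }

        ring->nr_frags++;
}

The point being illustrated is simply that the failure branch must release
the frame's buffers before bailing out; the patch itself does this through
the real ice_put_rx_mbuf() helper.
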
Thread overview: 10+ messages
2025-01-22 15:10 [PATCH v4 iwl-net 0/3] ice: fix Rx data path for heavy 9k MTU traffic Maciej Fijalkowski
2025-01-22 15:10 ` [PATCH v4 iwl-net 1/3] ice: put Rx buffers after being done with current frame Maciej Fijalkowski
2025-01-23 10:43 ` Simon Horman [this message]
2025-01-22 15:10 ` [PATCH v4 iwl-net 2/3] ice: gather page_count()'s of each frag right before XDP prog call Maciej Fijalkowski
2025-01-23 10:43 ` Simon Horman
2025-01-22 15:10 ` [PATCH v4 iwl-net 3/3] ice: stop storing XDP verdict within ice_rx_buf Maciej Fijalkowski
2025-01-22 19:42 ` [Intel-wired-lan] " kernel test robot
2025-01-23 10:45 ` Simon Horman
2025-01-23 10:51 ` Maciej Fijalkowski
2025-01-24 7:49 ` Przemek Kitszel