From: Simon Horman <horms@kernel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
anthony.l.nguyen@intel.com, magnus.karlsson@intel.com,
jacob.e.keller@intel.com, xudu@redhat.com, mschmidt@redhat.com,
jmaxwell@redhat.com, poros@redhat.com,
przemyslaw.kitszel@intel.com
Subject: Re: [PATCH v4 iwl-net 2/3] ice: gather page_count()'s of each frag right before XDP prog call
Date: Thu, 23 Jan 2025 10:43:25 +0000 [thread overview]
Message-ID: <20250123104325.GK395043@kernel.org> (raw)
In-Reply-To: <20250122151046.574061-3-maciej.fijalkowski@intel.com>

On Wed, Jan 22, 2025 at 04:10:45PM +0100, Maciej Fijalkowski wrote:
> If we store the pgcnt of a few fragments while we are in the middle of
> gathering the whole frame and then stumble upon the DD bit not being
> set, we terminate the NAPI Rx processing loop and come back later on.
> Then, on the next NAPI execution, we work on the previously stored
> pgcnt.
>
> Imagine that the second half of the page was actively used by the
> networking stack and, by the time we come back, the stack is no longer
> busy with this page and has decremented the refcnt. In this case the
> page reuse algorithm should be free to reuse the page, but given the
> stale refcnt it will not do so and will instead attempt to release the
> page via page_frag_cache_drain() with pagecnt_bias used as an arg.
> This in turn results in a negative refcnt on the struct page, which
> was initially observed by Xu Du.
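
The check that trips over the stale snapshot is the "are we the sole
owner" test in the page reuse path, which, as I recall, is roughly the
following (paraphrasing from memory, not an exact hunk):

	/* if we are the only owner of the page we can reuse it */
	if (rx_buf->pgcnt - rx_buf->pagecnt_bias > 1)
		return false;

With a pgcnt snapshot taken while the stack still held its reference,
the difference stays above 1 even after the stack has let go, so the
driver skips reuse and goes down the drain path described above.
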
>
> Therefore, move the page count storage from ice_get_rx_buf() to a
> place where we are sure that the whole frame has been collected, but
> before calling the XDP program, as it can internally also change the
> page count of fragments belonging to the xdp_buff.
>
> Fixes: ac0753391195 ("ice: Store page count inside ice_rx_buf")
> Reported-and-tested-by: Xu Du <xudu@redhat.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Co-developed-by: Jacob Keller <jacob.e.keller@intel.com>
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
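
FWIW, the shape of the fix, as I read the description, is to snapshot
page_count() for all frags only once the frame is complete, right
before the XDP program runs. A rough sketch of that idea (helper and
field names below are my own shorthand for the description above, not
the actual hunks):

	/* called once the EOP descriptor has been seen, before XDP */
	static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
	{
		u32 idx = rx_ring->first_desc;
		u32 cnt = rx_ring->count;
		struct ice_rx_buf *rx_buf;
		u32 i;

		/* walk every buffer that makes up the current frame and
		 * record the page refcount just before bpf_prog_run_xdp()
		 * gets a chance to change it
		 */
		for (i = 0; i < rx_ring->nr_frags + 1; i++) {
			rx_buf = &rx_ring->rx_buf[idx];
			rx_buf->pgcnt = page_count(rx_buf->page);

			if (++idx == cnt)
				idx = 0;
		}
	}
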
Reviewed-by: Simon Horman <horms@kernel.org>
Thread overview: 10+ messages
2025-01-22 15:10 [PATCH v4 iwl-net 0/3] ice: fix Rx data path for heavy 9k MTU traffic Maciej Fijalkowski
2025-01-22 15:10 ` [PATCH v4 iwl-net 1/3] ice: put Rx buffers after being done with current frame Maciej Fijalkowski
2025-01-23 10:43   ` Simon Horman
2025-01-22 15:10 ` [PATCH v4 iwl-net 2/3] ice: gather page_count()'s of each frag right before XDP prog call Maciej Fijalkowski
2025-01-23 10:43   ` Simon Horman [this message]
2025-01-22 15:10 ` [PATCH v4 iwl-net 3/3] ice: stop storing XDP verdict within ice_rx_buf Maciej Fijalkowski
2025-01-22 19:42   ` [Intel-wired-lan] " kernel test robot
2025-01-23 10:45     ` Simon Horman
2025-01-23 10:51       ` Maciej Fijalkowski
2025-01-24  7:49         ` Przemek Kitszel