From: Jakub Sitnicki <jakub@cloudflare.com>
To: Michal Kubiak <michal.kubiak@intel.com>
Cc: intel-wired-lan@lists.osuosl.org, maciej.fijalkowski@intel.com,
aleksander.lobakin@intel.com, jacob.e.keller@intel.com,
larysa.zaremba@intel.com, netdev@vger.kernel.org,
przemyslaw.kitszel@intel.com, pmenzel@molgen.mpg.de,
anthony.l.nguyen@intel.com
Subject: Re: [PATCH iwl-next v3 0/3] ice: convert Rx path to Page Pool
Date: Thu, 25 Sep 2025 11:56:31 +0200 [thread overview]
Message-ID: <877bxm4zzk.fsf@cloudflare.com> (raw)
In-Reply-To: <20250925092253.1306476-1-michal.kubiak@intel.com> (Michal Kubiak's message of "Thu, 25 Sep 2025 11:22:50 +0200")
On Thu, Sep 25, 2025 at 11:22 AM +02, Michal Kubiak wrote:
> This series modernizes the Rx path in the ice driver by removing legacy
> code and switching to the Page Pool API. The changes follow the same
> direction as the earlier iavf conversion and aim to simplify buffer
> management, improve maintainability, and prepare for future
> infrastructure reuse.
>
> An important motivation for this work was addressing reports of poor
> performance in XDP_TX mode when the IOMMU is enabled. The legacy Rx
> model incurred significant overhead from per-frame DMA mapping, which
> limited throughput in virtualized environments. This series eliminates
> those bottlenecks by adopting Page Pool with bidirectional DMA mapping.
>
> The first patch removes the legacy Rx path, which relied on manual skb
> allocation and header copying. This path has become obsolete due to the
> availability of build_skb() and the increasing complexity of supporting
> features like XDP and multi-buffer.
>
> The second patch drops the page splitting and recycling logic. While
> once used to optimize memory usage, this logic introduced significant
> complexity and hotpath overhead. Removing it simplifies the Rx flow and
> sets the stage for Page Pool adoption.
>
> The final patch switches the driver to use the Page Pool and libeth
> APIs. It also updates the XDP implementation to use libeth_xdp helpers
> and optimizes XDP_TX by avoiding per-frame DMA mapping. This results in
> a significant performance improvement in virtualized environments with
> IOMMU enabled (over 5x gain in XDP_TX throughput). In other scenarios,
> performance remains on par with the previous implementation.
>
> This conversion also aligns with the broader effort to modularize and
> unify XDP support across Intel Ethernet drivers.
>
> Tested on various workloads including netperf and XDP modes (PASS, DROP,
> TX) with and without IOMMU. No regressions observed.
Will we be able to have 256 B of XDP headroom after this conversion?
Thanks,
-jkbs
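
[Editorial note, to make the DMA-mapping change concrete for readers
skimming the archive: with Page Pool, each page is mapped once when the
pool hands it out, and a bidirectional mapping lets XDP_TX transmit
straight from the Rx buffer without a per-frame DMA map call. A rough
sketch of such a setup follows. This is not the actual ice
implementation, which goes through the libeth helpers mentioned in the
cover letter; the pool sizing below is an assumption for illustration.]

	/* Illustrative sketch only -- not the actual ice code. */
	#include <net/page_pool/helpers.h>

	static struct page_pool *rx_pool_create(struct device *dev)
	{
		struct page_pool_params pp = {
			/* Pool maps each page once, when it is allocated... */
			.flags		= PP_FLAG_DMA_MAP,
			.order		= 0,
			.pool_size	= 2048,		/* assumed ring size */
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
			/* ...and maps it bidirectionally, so XDP_TX can send
			 * from the Rx buffer with no per-frame mapping.
			 */
			.dma_dir	= DMA_BIDIRECTIONAL,
			/* Headroom reserved in front of each frame; this is
			 * where the 256 B XDP_PACKET_HEADROOM asked about
			 * above would be accounted for.
			 */
			.offset		= XDP_PACKET_HEADROOM,
			.max_len	= PAGE_SIZE - XDP_PACKET_HEADROOM,
		};

		return page_pool_create(&pp);
	}

[The .offset field is where fixed per-buffer headroom is reserved;
XDP_PACKET_HEADROOM is 256 bytes in the kernel.]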
Thread overview: 11+ messages
2025-09-25 9:22 [PATCH iwl-next v3 0/3] ice: convert Rx path to Page Pool Michal Kubiak
2025-09-25 9:22 ` [PATCH iwl-next v3 1/3] ice: remove legacy Rx and construct SKB Michal Kubiak
2025-10-22 23:53 ` [Intel-wired-lan] " Nowlin, Alexander
2025-09-25 9:22 ` [PATCH iwl-next v3 2/3] ice: drop page splitting and recycling Michal Kubiak
2025-10-22 23:56 ` [Intel-wired-lan] " Nowlin, Alexander
2025-09-25 9:22 ` [PATCH iwl-next v3 3/3] ice: switch to Page Pool Michal Kubiak
2025-10-22 23:58 ` [Intel-wired-lan] " Nowlin, Alexander
2025-10-23 0:03 ` Nowlin, Alexander
2025-09-25 9:56 ` Jakub Sitnicki [this message]
2025-09-25 17:22 ` [PATCH iwl-next v3 0/3] ice: convert Rx path " Jacob Keller
2025-09-26 9:40 ` Jakub Sitnicki