From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: tariqt@nvidia.com, idosch@idosch.org, hawk@kernel.org,
netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
andrew+netdev@lunn.ch, horms@kernel.org,
Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH net-next v2 0/4] eth: mlx4: use the page pool for Rx buffers
Date: Tue, 11 Feb 2025 11:21:37 -0800
Message-ID: <20250211192141.619024-1-kuba@kernel.org>

Convert mlx4 to page pool. I've been sitting on these patches for
over a year, and Jonathan Lemon had a similar series years before.
We never deployed it or sent it upstream because it didn't really show
much of a perf win under normal load (admittedly, I think the real
testing was done before Ilias's work on recycling).

During the v6.9 kernel rollout Meta's CDN team noticed that machines
with CX3 Pro (mlx4) are prone to overloads (double-digit % of CPU time
spent mapping buffers in the IOMMU). The problem does not occur with
modern NICs, so I dusted off this series and reportedly it still works.
It makes the problem go away: no overloads, perf back in line with
older kernels. Something must have changed in the IOMMU code, I guess.

This series is very simple and can very likely be optimized further.
Thing is, I don't have access to any CX3 Pro NICs. They only exist
in CDN locations which haven't had a HW refresh for a while. So I can
say this series survives a week under traffic with XDP enabled, but
my ability to iterate and improve is a bit limited.
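
For readers unfamiliar with the page pool interface, the core of the
conversion looks roughly like the sketch below. This uses the generic
in-kernel page_pool API; the mlx4-specific names (ring->pp, priv->ddev,
the refill/completion call sites) are illustrative assumptions, not
copied from the patches.

```c
/* Sketch: create a page pool per Rx ring; the pool owns DMA mapping
 * and recycling, replacing the driver's local fast-recycling ring.
 * Field values here are illustrative, not taken from the series. */
struct page_pool_params pp_params = {
	.flags		= PP_FLAG_DMA_MAP,	/* pool maps/unmaps pages */
	.pool_size	= ring->size,
	.nid		= numa_node_id(),
	.dev		= priv->ddev,
	.dma_dir	= DMA_BIDIRECTIONAL,	/* Rx and XDP_TX */
	.netdev		= priv->dev,
};

ring->pp = page_pool_create(&pp_params);
if (IS_ERR(ring->pp))
	return PTR_ERR(ring->pp);

/* Refill path: allocate a (possibly recycled) page from the pool
 * instead of calling the page allocator directly. */
page = page_pool_alloc_pages(ring->pp, GFP_ATOMIC | __GFP_NOWARN);

/* Completion path: return the page; it is recycled into the pool
 * when possible rather than freed. */
page_pool_put_full_page(ring->pp, page, /* allow_direct */ true);
```

Because the pool handles mapping, steady-state traffic reuses already
IOMMU-mapped pages, which is presumably why the map/unmap overhead
described above disappears.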

v2:
 - remove unnecessary .max_size (nit by Ido)
 - change the pool size
 - fix the XDP xmit support description
v1: https://lore.kernel.org/20250205031213.358973-1-kuba@kernel.org

Jakub Kicinski (4):
  eth: mlx4: create a page pool for Rx
  eth: mlx4: don't try to complete XDP frames in netpoll
  eth: mlx4: remove the local XDP fast-recycling ring
  eth: mlx4: use the page pool for Rx buffers

 drivers/net/ethernet/mellanox/mlx4/mlx4_en.h |  15 +--
 drivers/net/ethernet/mellanox/mlx4/en_rx.c   | 120 +++++++------------
 drivers/net/ethernet/mellanox/mlx4/en_tx.c   |  17 ++-
 3 files changed, 53 insertions(+), 99 deletions(-)
--
2.48.1