From: Tariq Toukan <tariqt@nvidia.com>
To: "David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>,
"Andrew Lunn" <andrew+netdev@lunn.ch>
Cc: Saeed Mahameed <saeedm@nvidia.com>,
Leon Romanovsky <leon@kernel.org>,
Tariq Toukan <tariqt@nvidia.com>,
Richard Cochran <richardcochran@gmail.com>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Jesper Dangaard Brouer <hawk@kernel.org>,
John Fastabend <john.fastabend@gmail.com>,
<netdev@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
<linux-kernel@vger.kernel.org>, <bpf@vger.kernel.org>,
Moshe Shemesh <moshe@nvidia.com>, Mark Bloch <mbloch@nvidia.com>,
Gal Pressman <gal@nvidia.com>, Cosmin Ratiu <cratiu@nvidia.com>,
Dragos Tatulea <dtatulea@nvidia.com>
Subject: [PATCH net-next V2 00/11] net/mlx5e: Add support for devmem and io_uring TCP zero-copy
Date: Fri, 23 May 2025 00:41:15 +0300
Message-ID: <1747950086-1246773-1-git-send-email-tariqt@nvidia.com>
This series from the team adds support for zero-copy TCP RX with devmem
and io_uring for ConnectX-7 NICs and above. For performance reasons and
for simplicity, HW-GRO is also turned on when header-data split mode is
enabled.
Find more details below.
Regards,
Tariq
Performance
===========
Test setup:
* CPU: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz (single NUMA)
* NIC: ConnectX-7
* Benchmarking tool: kperf [1]
* Single TCP flow
* Test duration: 60s
With application thread and interrupts pinned to the *same* core:
|------+-----------+----------|
| MTU | epoll | io_uring |
|------+-----------+----------|
| 1500 | 61.6 Gbps | 114 Gbps |
| 4096 | 69.3 Gbps | 151 Gbps |
| 9000 | 67.8 Gbps | 187 Gbps |
|------+-----------+----------|
The CPU usage for io_uring is 95%.
Reproduction steps for io_uring:
server --no-daemon -a 2001:db8::1 --no-memcmp --iou --iou_sendzc \
--iou_zcrx --iou_dev_name eth2 --iou_zcrx_queue_id 2
server --no-daemon -a 2001:db8::2 --no-memcmp --iou --iou_sendzc
client --src 2001:db8::2 --dst 2001:db8::1 \
--msg-zerocopy -t 60 --cpu-min=2 --cpu-max=2
Patch overview
==============
First, a netmem variant of skb_can_coalesce() is added to the core so
that skb fragment coalescing can be performed on netmems.
The next patches introduce cleanups in the internal SHAMPO code and
improve the HW-GRO capability checks against FW.
A separate page_pool is introduced for headers. Ethtool stats are added
as well.
Then the driver is converted to the netmem API and extended to support
unreadable netmem page pools.
The queue management ops are implemented.
Finally, the tcp-data-split ring parameter is exposed.
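The exposed knob is driven through ethtool's ring parameters. A usage
sketch (the interface name eth2 matches the repro steps above; actual
support depends on the NIC/driver, e.g. mlx5e with this series):

```shell
ethtool -g eth2                      # show current ring settings
ethtool -G eth2 tcp-data-split on    # force header-data split on
ethtool -G eth2 tcp-data-split auto  # let the driver decide
```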
Changelog
=========
Changes from v1 [0]:
- Added support for skb_can_coalesce_netmem().
- Avoid netmem_to_page() casts in the driver.
- Fixed code to abide by the 80-char limit, with some exceptions to
  avoid code churn.
References
==========
[0] v1: https://lore.kernel.org/all/20250116215530.158886-1-saeed@kernel.org/
[1] kperf: git://git.kernel.dk/kperf.git
Dragos Tatulea (1):
net: Add skb_can_coalesce for netmem
Saeed Mahameed (10):
net: Kconfig NET_DEVMEM selects GENERIC_ALLOCATOR
net/mlx5e: SHAMPO: Reorganize mlx5_rq_shampo_alloc
net/mlx5e: SHAMPO: Remove redundant params
net/mlx5e: SHAMPO: Improve hw gro capability checking
net/mlx5e: SHAMPO: Separate pool for headers
net/mlx5e: SHAMPO: Headers page pool stats
net/mlx5e: Convert over to netmem
net/mlx5e: Add support for UNREADABLE netmem page pools
net/mlx5e: Implement queue mgmt ops and single channel swap
net/mlx5e: Support ethtool tcp-data-split settings
drivers/net/ethernet/mellanox/mlx5/core/en.h | 11 +-
.../ethernet/mellanox/mlx5/core/en/params.c | 36 ++-
.../ethernet/mellanox/mlx5/core/en_ethtool.c | 50 ++++
.../net/ethernet/mellanox/mlx5/core/en_main.c | 281 +++++++++++++-----
.../net/ethernet/mellanox/mlx5/core/en_rx.c | 136 +++++----
.../ethernet/mellanox/mlx5/core/en_stats.c | 53 ++++
.../ethernet/mellanox/mlx5/core/en_stats.h | 24 ++
include/linux/skbuff.h | 12 +
net/Kconfig | 2 +-
9 files changed, 445 insertions(+), 160 deletions(-)
base-commit: 33e1b1b3991ba8c0d02b2324a582e084272205d6
--
2.31.1