From: David Wei <dw@davidwei.uk>
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei <dw@davidwei.uk>, Jens Axboe <axboe@kernel.dk>,
Pavel Begunkov <asml.silence@gmail.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
David Ahern <dsahern@kernel.org>,
Mina Almasry <almasrymina@google.com>
Subject: [PATCH v1 00/15] io_uring zero copy rx
Date: Mon, 7 Oct 2024 15:15:48 -0700
Message-ID: <20241007221603.1703699-1-dw@davidwei.uk>
This patchset adds support for zero copy rx into userspace pages using
io_uring, eliminating a kernel to user copy.
We configure a page pool that a driver uses to fill a hw rx queue to
hand out user pages instead of kernel pages. Any data that ends up
hitting this hw rx queue will thus be dma'd into userspace memory
directly, without needing to be bounced through kernel memory. 'Reading'
data out of a socket instead becomes a _notification_ mechanism, where
the kernel tells userspace where the data is. The overall approach is
similar to the devmem TCP proposal.
This relies on hw header/data split, flow steering and RSS to ensure
packet headers remain in kernel memory and only desired flows hit a hw
rx queue configured for zero copy. Configuring this is outside of the
scope of this patchset.
We share netdev core infra with devmem TCP. The main difference is that
io_uring is used for the uAPI and the lifetime of all objects is bound
to an io_uring instance. Data is 'read' using a new io_uring request
type. When done, data is returned via a new shared refill queue. A zero
copy page pool refills a hw rx queue from this refill queue directly. Of
course, the lifetime of these data buffers is managed by io_uring
rather than the networking stack, with different refcounting rules.
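To make the buffer lifecycle concrete, below is a minimal sketch of
what returning a buffer through the shared refill queue could look like
from userspace. The struct and field names (zcrx_refill_entry,
zcrx_refill_ring, zcrx_return_buffer) are illustrative assumptions for
this cover letter, not the uAPI added by the patches:

/*
 * Sketch only: names and layouts below are assumptions used for
 * illustration; see patches 9-15 for the actual interface.
 */
#include <stdatomic.h>
#include <stdint.h>

/* One refill entry: identifies a buffer being handed back to the kernel. */
struct zcrx_refill_entry {
	uint64_t off;		/* offset of the buffer in the registered area */
	uint32_t len;		/* length being returned */
	uint32_t __pad;
};

/* Shared refill ring mapped between userspace and the kernel. */
struct zcrx_refill_ring {
	_Atomic uint32_t *khead;	/* consumed by the kernel page pool */
	_Atomic uint32_t *ktail;	/* produced by userspace */
	uint32_t ring_entries;		/* power of two */
	struct zcrx_refill_entry *entries;
};

/* Userspace returns a buffer once it is done reading the payload. */
static int zcrx_return_buffer(struct zcrx_refill_ring *rq,
			      uint64_t off, uint32_t len)
{
	uint32_t head = atomic_load_explicit(rq->khead, memory_order_acquire);
	uint32_t tail = atomic_load_explicit(rq->ktail, memory_order_relaxed);
	struct zcrx_refill_entry *e;

	if (tail - head == rq->ring_entries)
		return -1;		/* ring full, retry later */

	e = &rq->entries[tail & (rq->ring_entries - 1)];
	e->off = off;
	e->len = len;

	/* Publish the entry before making it visible to the kernel. */
	atomic_store_explicit(rq->ktail, tail + 1, memory_order_release);
	return 0;
}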
This patchset is the first step adding basic zero copy support. We will
extend this iteratively with new features e.g. dynamically allocated
zero copy areas, THP support, dmabuf support, improved copy fallback,
general optimisations and more.
In terms of netdev support, we're first targeting Broadcom bnxt. Patches
aren't included since Taehee Yoo has already sent a more comprehensive
patchset adding support in [1]. Google gve should already support this,
and Mellanox mlx5 support is WIP pending driver changes.
===========
Performance
===========
Test setup:
* AMD EPYC 9454
* Broadcom BCM957508 200G
* Kernel v6.11 base [2]
* liburing fork [3]
* kperf fork [4]
* 4K MTU
* Single TCP flow
With application thread + net rx softirq pinned to _different_ cores:

  epoll:    82.2 Gbps
  io_uring: 116.2 Gbps (+41%)

Pinned to _same_ core:

  epoll:    62.6 Gbps
  io_uring: 80.9 Gbps (+29%)
==============
Patch overview
==============
Networking folks would be mostly interested in patches 1-8, 11 and 14.
Patches 1-2 clean up net_iov and devmem, then patches 3-8 make changes
to netdev to suit our needs.
Patch 11 implements struct memory_provider_ops, and Patch 14 passes it
all to netdev via the queue API.
io_uring folks would be mostly interested in patches 9-15:
* Initial registration that sets up a hw rx queue.
* Shared ringbuf for userspace to return buffers.
* New request type for doing zero copy rx reads.
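As a rough end-to-end illustration, the sketch below shows what the
receive path could look like from userspace once an interface queue,
zero copy area and refill ring have been registered (the registration
step is elided since its exact structures are defined by the patches).
The opcode name IORING_OP_RECV_ZC, the zcrx_cqe_payload layout, and the
multishot-style completion behaviour are assumptions for illustration;
the liburing fork in [3] exercises the real interface:

/* Sketch only: builds on the refill ring helper sketched earlier. */
#include <liburing.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Assumed layout of the second 16 bytes of a 32-byte CQE. */
struct zcrx_cqe_payload {
	uint64_t off;		/* where the payload landed in the area */
	uint32_t len;
	uint32_t flags;
};

/* Application-defined payload consumer. */
extern void consume(const unsigned char *data, size_t len);

static int zcrx_recv_loop(struct io_uring *ring, int sockfd,
			  unsigned char *area, struct zcrx_refill_ring *rq)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;
	/* Zero copy receive: no user buffer is passed with the request. */
	io_uring_prep_rw(IORING_OP_RECV_ZC /* assumed opcode name */,
			 sqe, sockfd, NULL, 0, 0);
	io_uring_submit(ring);

	for (;;) {
		struct io_uring_cqe *cqe;
		struct zcrx_cqe_payload p;
		int ret = io_uring_wait_cqe(ring, &cqe);

		if (ret < 0)
			return ret;
		if (cqe->res < 0) {
			io_uring_cqe_seen(ring, cqe);
			return cqe->res;
		}

		/*
		 * With IORING_SETUP_CQE32, the 16 bytes following the normal
		 * CQE fields describe the buffer holding the received data.
		 */
		memcpy(&p, cqe->big_cqe, sizeof(p));

		consume(area + p.off, p.len);
		zcrx_return_buffer(rq, p.off, p.len);	/* hand buffer back */

		io_uring_cqe_seen(ring, cqe);
	}
}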
=====
Links
=====
Broadcom bnxt support:
[1]: https://lore.kernel.org/netdev/20241003160620.1521626-8-ap420073@gmail.com/
Linux kernel branch:
[2]: https://github.com/isilence/linux.git zcrx/v5
liburing for testing:
[3]: https://github.com/spikeh/liburing/tree/zcrx/next
kperf for testing:
[4]: https://github.com/spikeh/kperf/tree/zcrx/next
Changes in v1:
--------------
* Rebase on top of merged net_iov + netmem infra.
* Decouple net_iov from devmem TCP.
* Use netdev queue API to allocate an rx queue.
* Minor uAPI enhancements for future extensibility.
* QoS improvements with request throttling.
Changes in RFC v4:
------------------
* Rebased on top of Mina Almasry's TCP devmem patchset and latest
net-next, now sharing common infra e.g.:
* netmem_t and net_iovs
* Page pool memory provider
* The registered buffer (rbuf) completion queue where completions from
io_recvzc requests are posted is removed. Now these post into the main
completion queue, using big (32-byte) CQEs. The first 16 bytes are an
ordinary CQE, while the latter 16 bytes contain the io_uring_rbuf_cqe
as before (see the layout sketch after this list). This vastly
simplifies the uAPI and removes a level of indirection in userspace
when looking for payloads.
* The rbuf refill queue is still needed for userspace to return
buffers to kernel.
* Simplified code and uAPI on the io_uring side, particularly
io_recvzc() and io_zc_rx_recv(). Many unnecessary lines were removed
e.g. extra msg flags, readlen, etc.
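For reference, a minimal sketch of how a 32-byte CQE splits into the
ordinary CQE plus a per-request payload descriptor; the descriptor
layout (rbuf_cqe_payload) is an assumption mirroring the description
above, not a definition taken from the patches:

#include <linux/io_uring.h>
#include <stdint.h>
#include <assert.h>

/* Assumed descriptor carried in the second half of a 32-byte CQE. */
struct rbuf_cqe_payload {
	uint64_t off;		/* offset of the payload in the zc area */
	uint32_t len;
	uint32_t flags;
};

/*
 * With IORING_SETUP_CQE32, each CQE slot is 32 bytes: the normal 16-byte
 * io_uring_cqe followed by 16 bytes of per-opcode data (big_cqe[2]).
 */
static_assert(sizeof(struct io_uring_cqe) == 16, "base CQE is 16 bytes");
static_assert(sizeof(struct rbuf_cqe_payload) == 16, "descriptor fills the rest");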
Changes in RFC v3:
------------------
* Rebased on top of Jakub Kicinski's memory provider API RFC. The ZC
pool added is now a backend for memory provider.
* We're also reusing ppiov infrastructure. The refcounting rules stay
the same but are shifted into ppiov->refcount. That lets us flexibly
manage buffer lifetimes without adding any extra code to the common
networking paths. It'd also make it easier to support dmabufs and
device memory in the future.
* io_uring also knows about pages, so ppiovs might unnecessarily break
tools inspecting data; that can easily be solved later.
Many patches are not for upstream as they depend on work in progress,
namely from Mina:
* struct netmem_t
* Driver ndo commands for Rx queue configs
* struct page_pool_iov and shared pp infra
Changes in RFC v2:
------------------
* Added copy fallback support if userspace memory allocated for ZC Rx
runs out, or if header splitting or flow steering fails.
* Added veth support for ZC Rx, for testing and demonstration. We will
need to figure out what driver would be best for such testing
functionality in the future. Perhaps netdevsim?
* Added socket registration API to io_uring to associate specific
sockets with ifqs/Rx queues for ZC.
* Added multi-socket support, such that multiple connections can be
steered into the same hardware Rx queue.
* Added Netbench server/client support.
David Wei (5):
net: page pool: add helper creating area from pages
io_uring/zcrx: add interface queue and refill queue
io_uring/zcrx: add io_zcrx_area
io_uring/zcrx: add io_recvzc request
io_uring/zcrx: set pp memory provider for an rx queue
Jakub Kicinski (1):
net: page_pool: create hooks for custom page providers
Pavel Begunkov (9):
net: devmem: pull struct definitions out of ifdef
net: prefix devmem specific helpers
net: generalise net_iov chunk owners
net: prepare for non devmem TCP memory providers
net: page_pool: add ->scrub mem provider callback
net: add helper executing custom callback from napi
io_uring/zcrx: implement zerocopy receive pp memory provider
io_uring/zcrx: add copy fallback
io_uring/zcrx: throttle receive requests
include/linux/io_uring/net.h | 5 +
include/linux/io_uring_types.h | 3 +
include/net/busy_poll.h | 6 +
include/net/netmem.h | 21 +-
include/net/page_pool/types.h | 27 ++
include/uapi/linux/io_uring.h | 54 +++
io_uring/Makefile | 1 +
io_uring/io_uring.c | 7 +
io_uring/io_uring.h | 10 +
io_uring/memmap.c | 8 +
io_uring/net.c | 81 ++++
io_uring/opdef.c | 16 +
io_uring/register.c | 7 +
io_uring/rsrc.c | 2 +-
io_uring/rsrc.h | 1 +
io_uring/zcrx.c | 847 +++++++++++++++++++++++++++++++++
io_uring/zcrx.h | 74 +++
net/core/dev.c | 53 +++
net/core/devmem.c | 44 +-
net/core/devmem.h | 71 ++-
net/core/page_pool.c | 81 +++-
net/core/page_pool_user.c | 15 +-
net/ipv4/tcp.c | 8 +-
23 files changed, 1364 insertions(+), 78 deletions(-)
create mode 100644 io_uring/zcrx.c
create mode 100644 io_uring/zcrx.h
--
2.43.5