From: Pavel Begunkov <asml.silence@gmail.com>
To: Jens Axboe <axboe@kernel.dk>, David Ahern <dsahern@kernel.org>,
David Wei <dw@davidwei.uk>,
io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
Mina Almasry <almasrymina@google.com>
Subject: Re: [PATCH v1 00/15] io_uring zero copy rx
Date: Wed, 9 Oct 2024 16:49:15 +0100
Message-ID: <b57dd3b9-b607-46ab-a2d0-98aedb2772f7@gmail.com>
In-Reply-To: <2144827e-ad7e-4cea-8e38-05fb310a85f5@kernel.dk>

On 10/9/24 16:43, Jens Axboe wrote:
> On 10/9/24 9:38 AM, David Ahern wrote:
>> On 10/9/24 9:27 AM, Jens Axboe wrote:
>>> On 10/7/24 4:15 PM, David Wei wrote:
>>>> ===========
>>>> Performance
>>>> ===========
>>>>
>>>> Test setup:
>>>> * AMD EPYC 9454
>>>> * Broadcom BCM957508 200G
>>>> * Kernel v6.11 base [2]
>>>> * liburing fork [3]
>>>> * kperf fork [4]
>>>> * 4K MTU
>>>> * Single TCP flow
>>>>
>>>> With application thread + net rx softirq pinned to _different_ cores:
>>>>
>>>> epoll
>>>> 82.2 Gbps
>>>>
>>>> io_uring
>>>> 116.2 Gbps (+41%)
>>>>
>>>> Pinned to _same_ core:
>>>>
>>>> epoll
>>>> 62.6 Gbps
>>>>
>>>> io_uring
>>>> 80.9 Gbps (+29%)
>>>
>>> I'll review the io_uring bits in detail, but I did take a quick look and
>>> overall it looks really nice.
>>>
>>> I decided to give this a spin, as I noticed that Broadcom now has a
>>> 230.x firmware release out that supports this. Hence no dependencies on
>>> that anymore, outside of some pain getting the fw updated. Here are my
>>> test setup details:
>>>
>>> Receiver:
>>> AMD EPYC 9754 (receiver)
>>> Broadcom P2100G
>>> -git + this series + the bnxt series referenced
>>>
>>> Sender:
>>> Intel(R) Xeon(R) Platinum 8458P
>>> Broadcom P2100G
>>> -git
>>>
>>> Test:
>>> kperf with David's patches to support io_uring zc. E.g. single flow TCP,
>>> just testing bandwidth. A single cpu/thread being used on both the
>>> receiver and sender side.
>>>
>>> non-zc
>>> 60.9 Gbps
>>>
>>> io_uring + zc
>>> 97.1 Gbps
>>
>> So line rate? Did you look at whether there is CPU to spare? Meaning it
>> will report higher speeds with a 200G setup?
>
> Yep basically line rate, I get 97-98Gbps. I originally used a slower box
> as the sender, but then you're capped on the non-zc sender being too
> slow. The Intel box does better, but it's still basically maxing out the
> sender at this point. So yeah, with a faster (or more efficient) sender,
> I have no doubt this will go much higher per thread, if the link bw was
> there. When I looked at CPU usage for the receiver, the thread itself is
> using ~30% CPU. And then there's some softirq/irq time outside of that,
> but that should amortize with higher bps rates too, I'd expect.
>
> My NIC does have 2 100G ports, so might warrant a bit more testing...
If you haven't done it already, I'd also pin softirq processing to
the same CPU as the app so we measure the full stack. kperf has an
option IIRC.
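
For anyone reproducing the numbers, here is a minimal C sketch of that
kind of pinning, assuming made-up CPU and IRQ numbers (take the real rx
queue IRQ from /proc/interrupts); it's an illustration of the idea, not
kperf's actual implementation:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/*
 * Steer the NIC rx queue's interrupt to the given CPU by writing a hex
 * CPU bitmask to /proc/irq/<irq>/smp_affinity (needs root). With the
 * app thread pinned to the same CPU, net rx softirq work and the app
 * are measured on one core.
 */
static int pin_irq_to_cpu(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%x", 1u << cpu);	/* valid for cpu < 32 */
	return fclose(f);
}

int main(void)
{
	int cpu = 4;	/* placeholder CPU number */
	int irq = 128;	/* placeholder IRQ, see /proc/interrupts */
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* pid 0 == the calling thread */
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return 1;
	}
	if (pin_irq_to_cpu(irq, cpu) < 0)
		perror("smp_affinity");
	/* ... run the receive loop here ... */
	return 0;
}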
--
Pavel Begunkov