From: Paolo Abeni <pabeni@redhat.com>
To: "huangjie.albert" <huangjie.albert@bytedance.com>,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org
Cc: "Alexei Starovoitov" <ast@kernel.org>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Jesper Dangaard Brouer" <hawk@kernel.org>,
"John Fastabend" <john.fastabend@gmail.com>,
"Björn Töpel" <bjorn@kernel.org>,
"Magnus Karlsson" <magnus.karlsson@intel.com>,
"Maciej Fijalkowski" <maciej.fijalkowski@intel.com>,
"Jonathan Lemon" <jonathan.lemon@gmail.com>,
"Pavel Begunkov" <asml.silence@gmail.com>,
"Yunsheng Lin" <linyunsheng@huawei.com>,
"Kees Cook" <keescook@chromium.org>,
"Richard Gobert" <richardbgobert@gmail.com>,
"open list:NETWORKING DRIVERS" <netdev@vger.kernel.org>,
"open list" <linux-kernel@vger.kernel.org>,
"open list:XDP (eXpress Data Path)" <bpf@vger.kernel.org>
Subject: Re: [RFC Optimizing veth xsk performance 00/10]
Date: Thu, 03 Aug 2023 16:20:06 +0200 [thread overview]
Message-ID: <a144aa6351412e25bbdf866c0d31b550e6ff3e8a.camel@redhat.com> (raw)
In-Reply-To: <20230803140441.53596-1-huangjie.albert@bytedance.com>
On Thu, 2023-08-03 at 22:04 +0800, huangjie.albert wrote:
> AF_XDP is a kernel bypass technology that can greatly improve performance.
> However, for virtual devices like veth, even with the use of AF_XDP sockets,
> there are still many additional software paths that consume CPU resources.
> This patch series focuses on optimizing the performance of AF_XDP sockets
> for veth virtual devices. Patches 1 to 4 mainly involve preparatory work.
> Patch 5 introduces tx queue and tx napi for packet transmission, while
> patch 9 primarily implements zero-copy, and patch 10 adds support for
> batch sending of IPv4 UDP packets. These optimizations significantly reduce
> the software path and support checksum offload.
>
> I tested these features with the typical topology shown below:
> veth<-->veth-peer              veth1-peer<--->veth1
>  1         |2                       6|          7
>            |                         |
> bridge<------->eth0(mlnx5)- switch -eth1(mlnx5)<--->bridge1
>  3                 4                     5
>       (machine1)                  (machine2)
> An AF_XDP socket is attached to veth and veth1, and packets are sent to the physical NIC (eth0).
> veth:(172.17.0.2/24)
> bridge:(172.17.0.1/24)
> eth0:(192.168.156.66/24)
>
> veth1:(172.17.0.2/24)
> bridge1:(172.17.0.1/24)
> eth1:(192.168.156.88/24)
>
> After setting up the default route, SNAT and DNAT, we can run the tests
> to get the performance results.
>
> packets send from veth to veth1:
> af_xdp test tool:
> link:https://github.com/cclinuxer/libxudp
> send:(veth)
> ./objs/xudpperf send --dst 192.168.156.88:6002 -l 1300
> recv:(veth1)
> ./objs/xudpperf recv --src 172.17.0.2:6002
>
> udp test tool:iperf3
> send:(veth)
> iperf3 -c 192.168.156.88 -p 6002 -l 1300 -b 60G -u
Should be: '-b 0', otherwise you will experience additional overhead.
You should also pin processes and IRQs so that BH and user space run on
different cores of the same NUMA node.
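For example, such pinning could be sketched roughly as below; the CPU
numbers, the interface name eth0 and the NUMA layout are illustrative
and must be adapted to the actual machine, and setting IRQ affinity
requires root:

```shell
# Pin the UDP sender (user space) to CPU 2, with unlimited bitrate.
taskset -c 2 iperf3 -c 192.168.156.88 -p 6002 -l 1300 -b 0 -u

# Pin eth0's IRQs (BH/softirq side) to CPU 3, chosen on the same
# NUMA node as CPU 2 but on a different core.
for irq in $(awk -F: '/eth0/ {gsub(/ /, "", $1); print $1}' /proc/interrupts); do
    echo 3 > "/proc/irq/$irq/smp_affinity_list"
done
```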
Cheers,
Paolo
Thread overview: 22+ messages
2023-08-03 14:04 [RFC Optimizing veth xsk performance 00/10] huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 01/10] veth: Implement ethtool's get_ringparam() callback huangjie.albert
2023-08-04 20:41 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 02/10] xsk: add dma_check_skip for skipping dma check huangjie.albert
2023-08-04 20:42 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 03/10] veth: add support for send queue huangjie.albert
2023-08-04 20:44 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 04/10] xsk: add xsk_tx_completed_addr function huangjie.albert
2023-08-04 20:46 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 05/10] veth: use send queue tx napi to xmit xsk tx desc huangjie.albert
2023-08-04 20:59 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 06/10] veth: add ndo_xsk_wakeup callback for veth huangjie.albert
2023-08-04 21:01 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 07/10] sk_buff: add destructor_arg_xsk_pool for zero copy huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 08/10] xdp: add xdp_mem_type MEM_TYPE_XSK_BUFF_POOL_TX huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 09/10] veth: support zero copy for af xdp huangjie.albert
2023-08-04 21:05 ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 10/10] veth: af_xdp tx batch support for ipv4 udp huangjie.albert
2023-08-04 21:12 ` Simon Horman
2023-08-03 14:20 ` Paolo Abeni [this message]
2023-08-04 4:16 ` [External] Re: [RFC Optimizing veth xsk performance 00/10] 黄杰
2023-08-03 15:01 ` Jesper Dangaard Brouer