* [RFC Optimizing veth xsk performance 00/10]
@ 2023-08-03 14:04 huangjie.albert
From: huangjie.albert @ 2023-08-03 14:04 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni
  Cc: huangjie.albert, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, Björn Töpel,
	Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon,
	Pavel Begunkov, Yunsheng Lin, Kees Cook, Richard Gobert,
	open list:NETWORKING DRIVERS, open list,
	open list:XDP (eXpress Data Path)


AF_XDP is a kernel-bypass technology that can greatly improve performance.
However, for virtual devices like veth, even with AF_XDP sockets there are
still many additional software paths that consume CPU resources.
This patch series focuses on optimizing the performance of AF_XDP sockets
for veth virtual devices. Patches 1 to 4 are preparatory work. Patch 5
introduces a tx queue and tx napi for packet transmission, patch 9
implements zero-copy, and patch 10 adds support for batch sending of
IPv4 UDP packets. Together these optimizations significantly shorten the
software path and add checksum offload support.
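
For context, here is a minimal sketch of the user-space TX side that
drives this path (a hypothetical helper built only on the stock
libbpf/libxdp xsk API; it assumes the socket and umem are already set
up, and is not code from this series):

	#include <sys/socket.h>
	#include <xdp/xsk.h>

	/* Queue n already-filled umem frames on the TX ring and kick
	 * the kernel.  In need-wakeup mode the sendto() ends up in the
	 * driver's ndo_xsk_wakeup (added for veth by patch 6), which
	 * schedules the tx napi from patch 5 to xmit the descriptors.
	 */
	static void xsk_tx_batch(struct xsk_socket *xsk,
				 struct xsk_ring_prod *tx,
				 const __u64 *addrs, const __u32 *lens,
				 __u32 n)
	{
		__u32 i, idx;

		if (xsk_ring_prod__reserve(tx, n, &idx) != n)
			return;		/* TX ring full, retry later */

		for (i = 0; i < n; i++) {
			struct xdp_desc *d =
				xsk_ring_prod__tx_desc(tx, idx + i);

			d->addr = addrs[i];	/* umem frame offset */
			d->len  = lens[i];
		}
		xsk_ring_prod__submit(tx, n);

		if (xsk_ring_prod__needs_wakeup(tx))
			sendto(xsk_socket__fd(xsk), NULL, 0,
			       MSG_DONTWAIT, NULL, 0);
	}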

I tested these features with the typical topology shown below:
veth<-->veth-peer                                    veth1-peer<--->veth1
	1       |                                                  |   7
	        |2                                                6|
	        |                                                  |
	      bridge<------->eth0(mlnx5)- switch -eth1(mlnx5)<--->bridge1
                  3                    4                 5    
             (machine1)                              (machine2)    
AF_XDP sockets are attached to veth and veth1, and packets are sent out
through the physical NIC (eth0).
machine1:
veth:(172.17.0.2/24)
bridge:(172.17.0.1/24)
eth0:(192.168.156.66/24)

machine2:
veth1:(172.17.0.2/24)
bridge1:(172.17.0.1/24)
eth1:(192.168.156.88/24)

After setting up the default route, SNAT, and DNAT (one possible
configuration is sketched below), we can run the following tests to get
performance results.
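
For reference, a plausible version of that setup (illustrative commands
only, inferred from the addresses above; the exact rules used in the
original tests may differ):

# machine1: default route inside the netns holding veth, plus SNAT so
# traffic leaves with eth0's address
ip route add default via 172.17.0.1
iptables -t nat -A POSTROUTING -s 172.17.0.0/24 -o eth0 \
	-j SNAT --to-source 192.168.156.66

# machine2: DNAT the test flow arriving on eth1 back to veth1
iptables -t nat -A PREROUTING -d 192.168.156.88 -p udp --dport 6002 \
	-j DNAT --to-destination 172.17.0.2:6002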

Packets are sent from veth to veth1:
af_xdp test tool:
link: https://github.com/cclinuxer/libxudp
send:(veth)
./objs/xudpperf send --dst 192.168.156.88:6002 -l 1300
recv:(veth1)
./objs/xudpperf recv --src 172.17.0.2:6002

udp test tool: iperf3
send:(veth)
iperf3 -c 192.168.156.88 -p 6002 -l 1300 -b 60G -u
recv:(veth1)
iperf3 -s -p 6002

performance (veth, with the libxdp test tool):
UDP                              : 250 Kpps (with 100% cpu)
AF_XDP   no  zerocopy + no batch : 480 Kpps (with ksoftirqd 100% cpu)
AF_XDP  with zerocopy + no batch : 540 Kpps (with ksoftirqd 100% cpu)
AF_XDP  with  batch  +  zerocopy : 1.5 Mpps (with ksoftirqd 15% cpu)

With AF_XDP batching, the libxdp user-space program becomes the
bottleneck, so the softirq does not reach its limit.

This is just an RFC patch series, and some code details still need 
further consideration. Please review this proposal.

thanks!

huangjie.albert (10):
  veth: Implement ethtool's get_ringparam() callback
  xsk: add dma_check_skip for skipping dma check
  veth: add support for send queue
  xsk: add xsk_tx_completed_addr function
  veth: use send queue tx napi to xmit xsk tx desc
  veth: add ndo_xsk_wakeup callback for veth
  sk_buff: add destructor_arg_xsk_pool for zero copy
  xdp: add xdp_mem_type MEM_TYPE_XSK_BUFF_POOL_TX
  veth: support zero copy for af xdp
  veth: af_xdp tx batch support for ipv4 udp

 drivers/net/veth.c          | 729 +++++++++++++++++++++++++++++++++++-
 include/linux/skbuff.h      |   1 +
 include/net/xdp.h           |   1 +
 include/net/xdp_sock_drv.h  |   1 +
 include/net/xsk_buff_pool.h |   1 +
 net/xdp/xsk.c               |   6 +
 net/xdp/xsk_buff_pool.c     |   3 +-
 net/xdp/xsk_queue.h         |  11 +
 8 files changed, 751 insertions(+), 2 deletions(-)

-- 
2.20.1



Thread overview: 22+ messages
2023-08-03 14:04 [RFC Optimizing veth xsk performance 00/10] huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 01/10] veth: Implement ethtool's get_ringparam() callback huangjie.albert
2023-08-04 20:41   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 02/10] xsk: add dma_check_skip for skipping dma check huangjie.albert
2023-08-04 20:42   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 03/10] veth: add support for send queue huangjie.albert
2023-08-04 20:44   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 04/10] xsk: add xsk_tx_completed_addr function huangjie.albert
2023-08-04 20:46   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 05/10] veth: use send queue tx napi to xmit xsk tx desc huangjie.albert
2023-08-04 20:59   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 06/10] veth: add ndo_xsk_wakeup callback for veth huangjie.albert
2023-08-04 21:01   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 07/10] sk_buff: add destructor_arg_xsk_pool for zero copy huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 08/10] xdp: add xdp_mem_type MEM_TYPE_XSK_BUFF_POOL_TX huangjie.albert
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 09/10] veth: support zero copy for af xdp huangjie.albert
2023-08-04 21:05   ` Simon Horman
2023-08-03 14:04 ` [RFC Optimizing veth xsk performance 10/10] veth: af_xdp tx batch support for ipv4 udp huangjie.albert
2023-08-04 21:12   ` Simon Horman
2023-08-03 14:20 ` [RFC Optimizing veth xsk performance 00/10] Paolo Abeni
2023-08-04  4:16   ` [External] " 黄杰
2023-08-03 15:01 ` Jesper Dangaard Brouer
