From: David Ahern
Subject: Re: [PATCH v15] net/veth/XDP: Line-rate packet forwarding in kernel
Date: Tue, 3 Apr 2018 19:16:58 -0600
Message-ID: <5395c7a0-2c1f-27f1-2490-dd4db68fbdc1@gmail.com>
To: "Md. Islam", netdev@vger.kernel.org, David Miller,
    stephen@networkplumber.org, agaceph@gmail.com, Pavel Emelyanov,
    Eric Dumazet, alexei.starovoitov@gmail.com, brouer@redhat.com

On 4/1/18 6:47 PM, Md. Islam wrote:
> This patch implements IPv4 forwarding on xdp_buff. I added a new
> config option, XDP_ROUTER. The kernel forwards packets through the
> fast path when this option is enabled, but it requires driver
> support. Currently it only works with veth, which I have modified to
> output an xdp_buff. I created a testbed in Mininet; the Mininet
> script (topology.py) is attached. The topology is:
>
> h1 -----r1-----h2 (r1 acts as a router)
>
> This patch improves throughput from 53.8 Gb/s to 60 Gb/s on my
> machine. Median RTT also improved from around 0.055 ms to around
> 0.035 ms.
>
> I then disabled hyperthreading and CPU frequency scaling in order to
> make better use of the CPU cache (DPDK also relies on the CPU cache
> to improve forwarding). This further improves per-packet forwarding
> latency from around 400 ns to 200 ns. More specifically, header
> parsing and the FIB lookup take only around 82 ns. This shows that
> the approach could be used to implement line-rate packet forwarding
> in the kernel.
>
> The patch was generated against 4.15.0+. Please let me know your
> feedback and suggestions, and whether this approach makes sense.

This patch is not really using eBPF and XDP; rather, it is trying to
short-circuit forwarding through a veth pair.

Have you looked at the loss in performance with this config enabled if
there is no r1? i.e., h1 {veth1} <---> {veth2} / h2. You are adding a
lookup per packet to the Tx path.

Have you looked at what I would consider a more interesting use case:
packets arriving at a node and then delivered to a namespace via veth?

  +--------------------------+---------------
  |  Host                    | container |
  |                          |
  |    +-------{ veth1 }-----|-{ veth2 }----
  |    |                     |
  +----{ eth1 }------------------

Can xdp / bpf on eth1 be used to speed up delivery to the container?
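
To make the question concrete, here is a minimal, untested sketch of
what such a program on eth1 might look like. The container address
(10.0.0.2), the host-side veth ifindex, and the program/section names
are all placeholders, and it glosses over details a real program would
need: rewriting the Ethernet header for the veth peer, and the veth
driver being able to accept redirected frames in the first place.

/* Hypothetical sketch: XDP program for eth1 that redirects packets
 * destined to the container's address straight to the host-side veth,
 * bypassing the host stack. The address and ifindex below are
 * placeholder values for illustration only.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define CONTAINER_ADDR  bpf_htonl(0x0a000002)  /* 10.0.0.2, assumed */
#define VETH1_IFINDEX   4              /* host-side veth, assumed */

SEC("xdp")
int xdp_to_container(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct ethhdr *eth = data;
	struct iphdr *iph;

	/* Bounds checks to satisfy the verifier. */
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	iph = (void *)(eth + 1);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	/* Traffic for the container skips the host stack entirely.
	 * A complete program would also rewrite the MAC addresses
	 * for the veth peer before redirecting.
	 */
	if (iph->daddr == CONTAINER_ADDR)
		return bpf_redirect(VETH1_IFINDEX, 0);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Loaded with something like:

  ip link set dev eth1 xdp obj xdp_to_container.o sec xdp

Whether this actually beats the regular veth receive path would need
measuring, but it is the comparison I would find interesting.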