From: David Ahern
Subject: Re: Kernel 4.19 network performance - forwarding/routing normal users traffic
Date: Wed, 31 Oct 2018 21:37:16 -0600
To: Paweł Staszewski, netdev

On 10/31/18 3:57 PM, Paweł Staszewski wrote:
> Hi
>
> So maybe someone will be interested in how the Linux kernel handles
> normal traffic (not pktgen :) ).
>
>
> Server HW configuration:
>
> CPU: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
>
> NICs: 2x 100G Mellanox ConnectX-4 (connected to x16 PCIe 8GT/s)
>
>
> Server software:
>
> FRR - as routing daemon
>
> enp175s0f0 (100G) - 16 vlans from upstreams (28 RSS queues bound to the
> local NUMA node)
>
> enp175s0f1 (100G) - 343 vlans to clients (28 RSS queues bound to the
> local NUMA node)
>
>
> Maximum traffic the server can handle:
>
> Bandwidth:
>
>  bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>   input: /proc/net/dev type: rate
>   \         iface                   Rx                   Tx                Total
> ==============================================================================
>        enp175s0f1:          28.51 Gb/s           37.24 Gb/s           65.74 Gb/s
>        enp175s0f0:          38.07 Gb/s           28.44 Gb/s           66.51 Gb/s
> ------------------------------------------------------------------------------
>             total:          66.58 Gb/s           65.67 Gb/s          132.25 Gb/s
>
>
> Packets per second:
>
>  bwm-ng v0.6.1 (probing every 1.000s), press 'h' for help
>   input: /proc/net/dev type: rate
>   -         iface                   Rx                   Tx                Total
> ==============================================================================
>        enp175s0f1:      5248589.00 P/s       3486617.75 P/s       8735207.00 P/s
>        enp175s0f0:      3557944.25 P/s       5232516.00 P/s       8790460.00 P/s
> ------------------------------------------------------------------------------
>             total:      8806533.00 P/s       8719134.00 P/s      17525668.00 P/s
>
>
> After reaching these limits, the NICs on the upstream side (more RX
> traffic) start to drop packets.
>
> I just don't understand why the server can't handle more bandwidth
> (~40 Gbit/s is the limit where all CPUs are at 100% utilization) while
> pps on the RX side keeps increasing.
>
> I was thinking that maybe I had hit some PCIe x16 limit - but x16 at
> 8 GT/s is ~126 Gbit/s - and when testing with pktgen I can reach more
> bandwidth and pps (about 4x more compared to normal internet traffic).
>
> I am wondering if there is something that can be improved here.

This is mainly a forwarding use case? Seems so based on the perf report.

I suspect forwarding with XDP would show a pretty good improvement. You
need the vlan changes I have queued up though.
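
For context, here is a minimal sketch of what XDP-based forwarding can look
like, in the spirit of the kernel's samples/bpf/xdp_fwd_kern.c: look up the
next hop in the kernel FIB with the bpf_fib_lookup() helper, rewrite the MAC
header, and redirect the frame to the egress device before an skb is ever
allocated. This is only an illustration, not the queued vlan work; it assumes
plain untagged IPv4 and a clang/libbpf toolchain, and it omits VLAN handling,
IPv6, and devmap setup for brevity.

/* xdp_fwd_sketch.c - illustrative only, not the queued vlan patches */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#ifndef AF_INET
#define AF_INET 2
#endif

/* Incremental IP checksum update for the TTL decrement, as done in the
 * kernel's xdp_fwd sample.
 */
static __always_inline void ip_decrease_ttl(struct iphdr *iph)
{
	__u32 check = iph->check;

	check += bpf_htons(0x0100);
	iph->check = (__u16)(check + (check >= 0xFFFF));
	iph->ttl--;
}

SEC("xdp")
int xdp_fwd_prog(struct xdp_md *ctx)
{
	void *data_end = (void *)(long)ctx->data_end;
	void *data = (void *)(long)ctx->data;
	struct bpf_fib_lookup fib = {};
	struct ethhdr *eth = data;
	struct iphdr *iph;
	int rc;

	/* Bounds checks required by the verifier. */
	if (data + sizeof(*eth) + sizeof(*iph) > data_end)
		return XDP_PASS;

	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;	/* vlan/IPv6 handling omitted */

	iph = data + sizeof(*eth);
	if (iph->ttl <= 1)
		return XDP_PASS;

	/* Ask the kernel FIB for the next hop of this packet. */
	fib.family	= AF_INET;
	fib.tos		= iph->tos;
	fib.l4_protocol	= iph->protocol;
	fib.tot_len	= bpf_ntohs(iph->tot_len);
	fib.ipv4_src	= iph->saddr;
	fib.ipv4_dst	= iph->daddr;
	fib.ifindex	= ctx->ingress_ifindex;

	rc = bpf_fib_lookup(ctx, &fib, sizeof(fib), 0);
	if (rc != BPF_FIB_LKUP_RET_SUCCESS)
		return XDP_PASS;	/* let the normal stack handle it */

	/* Rewrite the Ethernet header and send it out the egress port. */
	ip_decrease_ttl(iph);
	__builtin_memcpy(eth->h_dest, fib.dmac, ETH_ALEN);
	__builtin_memcpy(eth->h_source, fib.smac, ETH_ALEN);

	return bpf_redirect(fib.ifindex, 0);
}

char _license[] SEC("license") = "GPL";

Something like the above would typically be attached per interface with
iproute2, e.g. "ip link set dev enp175s0f0 xdp obj xdp_fwd_sketch.o sec xdp"
(interface names taken from the report above). Real deployments usually pair
this with a devmap and bpf_redirect_map() for better redirect performance.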