From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jesper Dangaard Brouer
Subject: Re: Optimizing instruction-cache, more packets at each stage
Date: Fri, 22 Jan 2016 13:33:41 +0100
Message-ID: <20160122133341.0f174115@redhat.com>
References: <20160115142223.1e92be75@redhat.com>
	<20160115.154721.458450438918273509.davem@davemloft.net>
	<20160118112703.6eac71ca@redhat.com>
	<20160118.112455.212265265553435873.davem@davemloft.net>
	<1453330945.1223.329.camel@edumazet-glaptop2.roam.corp.google.com>
	<20160121132304.12ff9d23@redhat.com>
	<1453398516.1223.376.camel@edumazet-glaptop2.roam.corp.google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Cc: Tom Herbert, Or Gerlitz, David Miller, Eric Dumazet,
	Linux Netdev List, Alexander Duyck, Alexei Starovoitov,
	Daniel Borkmann, Marek Majkowski, Hannes Frederic Sowa,
	Florian Westphal, Paolo Abeni, John Fastabend, Amir Vadai,
	brouer@redhat.com
To: Eric Dumazet
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:55846 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753169AbcAVMdv (ORCPT ); Fri, 22 Jan 2016 07:33:51 -0500
In-Reply-To: <1453398516.1223.376.camel@edumazet-glaptop2.roam.corp.google.com>
Sender: netdev-owner@vger.kernel.org
List-ID:

On Thu, 21 Jan 2016 09:48:36 -0800
Eric Dumazet wrote:

> On Thu, 2016-01-21 at 08:38 -0800, Tom Herbert wrote:
>
> > Sure, but the receive path is parallelized.
>
> This is true for multiqueue processing, assuming you can dedicate many
> cores to process RX.
>
> > Improving parallelism has
> > continuously shown to have much more impact than attempting to
> > optimize for cache misses. The primary goal is not to drive 100Gbps
> > with 64 packets from a single CPU. It is one benchmark of many we
> > should look at to measure efficiency of the data path, but I've yet to
> > see any real workload that requires that...
> >
> > Regardless of anything, we need to load packet headers into CPU cache
> > to do protocol processing. I'm not sure I see how trying to defer that
> > as long as possible helps except in cases where the packet is crossing
> > CPU cache boundaries and can eliminate cache misses completely (not
> > just move them around from one function to another).
>
> Note that some user space use multiple core (or hyper threads) to
> implement a pipeline, using a single RX queue.
>
> One thread can handle one stage (device RX drain) and prefetch data into
> shared L1/L2 (and/or shared L3 for pipelines with more than 2 threads)
>
> The second thread process packets with headers already in L1/L2

I agree. I've heard of experiences where DPDK users dedicate 2 cores
for RX and 1 core for TX, and achieve 10G wirespeed (14Mpps) real IPv4
forwarding with a full Internet routing table lookup.

One of the ideas behind my alf_queue is that it can be used for
efficiently distributing objects (pointers) between threads:
 1. because it only transfers the pointers (not touching the objects), and
 2. because it enqueues/dequeues multiple objects with a single locked
    cmpxchg.
Thus, it lowers the message-passing cost between threads. A simplified
sketch of the idea is included below.
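To illustrate the principle (this is NOT the actual alf_queue code, just
a stripped-down user-space sketch with made-up names; it assumes a
single consumer and non-NULL pointers, while the real queue also
supports multiple consumers and bulking on the dequeue side):

/* Simplified bulk pointer-transfer ring (illustrative only).
 * Multiple producers reserve n slots with a single cmpxchg; only the
 * pointers travel between threads, the objects themselves are untouched.
 * Slots hold NULL when empty, so only non-NULL pointers may be queued.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define RING_SIZE 256			/* power of two */
#define RING_MASK (RING_SIZE - 1)

struct bulk_ring {
	_Atomic unsigned int prod_head;	/* next slot to reserve */
	_Atomic unsigned int cons_tail;	/* next slot to consume */
	void *_Atomic slot[RING_SIZE];	/* zero-initialized (all NULL) */
};

/* Multi-producer bulk enqueue: one cmpxchg reserves all n slots. */
static bool ring_enqueue_bulk(struct bulk_ring *r, void **objs, unsigned int n)
{
	unsigned int head = atomic_load(&r->prod_head);

	do {
		unsigned int used = head - atomic_load(&r->cons_tail);

		if (RING_SIZE - used < n)	/* not enough free room */
			return false;
	} while (!atomic_compare_exchange_weak(&r->prod_head, &head, head + n));

	for (unsigned int i = 0; i < n; i++)
		atomic_store(&r->slot[(head + i) & RING_MASK], objs[i]);
	return true;
}

/* Single-consumer bulk dequeue: grab up to n pointers in one go. */
static unsigned int ring_dequeue_bulk(struct bulk_ring *r, void **objs, unsigned int n)
{
	unsigned int tail = atomic_load(&r->cons_tail);
	unsigned int avail = atomic_load(&r->prod_head) - tail;

	if (n > avail)
		n = avail;
	for (unsigned int i = 0; i < n; i++) {
		void *obj;

		/* A producer may still be filling this reserved slot. */
		do {
			obj = atomic_exchange(&r->slot[(tail + i) & RING_MASK], NULL);
		} while (!obj);
		objs[i] = obj;
	}
	atomic_store(&r->cons_tail, tail + n);
	return n;
}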
> This way, the ~100 ns (or even more if you also consider skb
> allocations) penalty to bring packet headers do not hurt PPS.

I've studied the allocation cost in great detail, so let me share my
numbers; 100 ns is too high.

Total cost of alloc+free for 256 byte objects (on CPU i7-4790K @ 4.00GHz).
The cycles count should be comparable with other CPUs, but the nanosec
measurements are affected by the very high clock frequency of this CPU.

Kmem_cache fastpath "recycle" case:
 SLUB =>  44 cycles(tsc) 11.205 ns
 SLAB =>  96 cycles(tsc) 24.119 ns

The problem is that real use-cases in the network stack almost always
hit the slowpath in the kmem_cache allocators.

Kmem_cache "slowpath" case:
 SLUB => 117 cycles(tsc) 29.276 ns
 SLAB => 101 cycles(tsc) 25.342 ns

I've addressed this "slowpath" problem in the SLUB and SLAB allocators
by introducing a bulk API, which amortizes the needed sync-mechanisms.
(A small usage sketch of the bulk API is appended after my signature.)

Kmem_cache using bulk API:
 SLUB =>  37 cycles(tsc)  9.280 ns
 SLAB =>  20 cycles(tsc)  5.035 ns

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
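For reference, a minimal sketch of how the slab bulk API
(kmem_cache_alloc_bulk() / kmem_cache_free_bulk()) is meant to be used.
The cache, batch size and calling context below are made up for
illustration, not taken from the patchset:

/* Minimal usage sketch of the kmem_cache bulk API (illustrative only). */
#include <linux/slab.h>
#include <linux/errno.h>

#define BATCH 16

static int demo_bulk_alloc_free(struct kmem_cache *cache)
{
	void *objs[BATCH];
	int i;

	/* One call fills the whole array, so the slowpath/sync cost is
	 * paid once per batch instead of once per object. */
	if (!kmem_cache_alloc_bulk(cache, GFP_ATOMIC, BATCH, objs))
		return -ENOMEM;		/* nothing was allocated */

	for (i = 0; i < BATCH; i++) {
		/* ... hand objs[i] to the RX/TX processing stage ... */
	}

	/* Returning the objects is likewise amortized over the batch. */
	kmem_cache_free_bulk(cache, BATCH, objs);
	return 0;
}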