Date: Fri, 22 Mar 2019 07:28:33 -0400
From: "Michael S. Tsirkin"
To: Eric Dumazet
Cc: "David S . Miller", netdev, Soheil Hassas Yeganeh, Willem de Bruijn,
 Florian Westphal, Tom Herbert, Eric Dumazet
Subject: Re: [PATCH v2 net-next 0/3] tcp: add rx/tx cache to reduce lock contention
Message-ID: <20190322072802-mutt-send-email-mst@kernel.org>
References: <20190322001444.182463-1-edumazet@google.com>
In-Reply-To: <20190322001444.182463-1-edumazet@google.com>
List-ID: netdev@vger.kernel.org

On Thu, Mar 21, 2019 at 05:14:41PM -0700, Eric Dumazet wrote:
> On hosts with many cpus we can observe very serious contention
> on the spinlocks used in the mm slab layer.
>
> The following can happen quite often:
>
> 1) TX path
>    sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
>    ACK is received on CPU B, and consumes the skb that was in the
>    retransmit queue.
>
> 2) RX path
>    network driver allocates skb on CPU C
>    recvmsg() happens on CPU D, freeing the skb after it has been
>    delivered to user space.
>
> In both cases, we are hitting the asymmetric alloc/free pattern
> for which slab has to drain alien caches. At 8 Mpps, this represents
> 16 M alloc/free operations per second and has a huge penalty.
>
> In an interesting experiment, I tried to use a single kmem_cache for
> all the skbs (in skb_init() : skbuff_fclone_cache = skbuff_head_cache =
>   kmem_cache_create("skbuff_fclone_cache", sizeof(struct sk_buff_fclones),);
> and most of the contention disappeared, since cpus could better use
> their local slab per-cpu cache.
>
> But we can actually do better, in the following patches.
>
> TX : at ACK time, no longer free the skb but put it back in a tcp
>      socket cache, so that the next sendmsg() can reuse it immediately.
>
> RX : at recvmsg() time, do not free the skb but put it in a tcp socket
>      cache so that it can be freed by the cpu feeding the incoming
>      packets in BH.
>
> This increased the performance of a small RPC benchmark by about 10%
> on a host with 112 hyperthreads.
>
> v2 : - Solved a race condition : sk_stream_alloc_skb() now makes sure
>        the prior clone has been freed.
>      - Really test rps_needed in sk_eat_skb() as claimed.
>      - Fixed rps_needed use in drivers/net/tun.c

Just a thought: would it make sense to flush the cache in
enter_memory_pressure?

> Eric Dumazet (3):
>   net: convert rps_needed and rfs_needed to new static branch api
>   tcp: add one skb cache for tx
>   tcp: add one skb cache for rx
>
>  drivers/net/tun.c          |  2 +-
>  include/linux/netdevice.h  |  4 +--
>  include/net/sock.h         | 13 ++++++++-
>  net/core/dev.c             | 10 +++----
>  net/core/net-sysfs.c       |  4 +--
>  net/core/sysctl_net_core.c |  8 +++---
>  net/ipv4/af_inet.c         |  4 +++
>  net/ipv4/tcp.c             | 54 +++++++++++++++++++-------------------
>  net/ipv4/tcp_ipv4.c        | 11 ++++++--
>  net/ipv6/tcp_ipv6.c        | 12 ++++++---
>  10 files changed, 75 insertions(+), 47 deletions(-)
>
> --
> 2.21.0.225.g810b269d1ac-goog