Date: Wed, 28 Aug 2019 10:02:18 -0400
From: Joel Fernandes
To: Boqun Feng
McKenney" , byungchul.park@lge.com, Josh Triplett , Lai Jiangshan , linux-doc@vger.kernel.org, Mathieu Desnoyers , rcu@vger.kernel.org, Steven Rostedt , kernel-team@android.com Subject: Re: [PATCH 2/5] rcu/tree: Add multiple in-flight batches of kfree_rcu work Message-ID: <20190828140218.GB230957@google.com> References: <5d657e35.1c69fb81.54250.01de@mx.google.com> <20190827235253.GB30253@tardis> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20190827235253.GB30253@tardis> User-Agent: Mutt/1.10.1 (2018-07-13) Sender: linux-doc-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-doc@vger.kernel.org On Wed, Aug 28, 2019 at 07:52:53AM +0800, Boqun Feng wrote: > Hi Joel, > > On Tue, Aug 27, 2019 at 03:01:56PM -0400, Joel Fernandes (Google) wrote: > > During testing, it was observed that amount of memory consumed due > > kfree_rcu() batching is 300-400MB. Previously we had only a single > > head_free pointer pointing to the list of rcu_head(s) that are to be > > freed after a grace period. Until this list is drained, we cannot queue > > any more objects on it since such objects may not be ready to be > > reclaimed when the worker thread eventually gets to drainin g the > > head_free list. > > > > We can do better by maintaining multiple lists as done by this patch. > > Testing shows that memory consumption came down by around 100-150MB with > > just adding another list. Adding more than 1 additional list did not > > show any improvement. > > > > Suggested-by: Paul E. McKenney > > Signed-off-by: Joel Fernandes (Google) > > --- > > kernel/rcu/tree.c | 64 +++++++++++++++++++++++++++++++++-------------- > > 1 file changed, 45 insertions(+), 19 deletions(-) > > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c > > index 4f7c3096d786..9b9ae4db1c2d 100644 > > --- a/kernel/rcu/tree.c > > +++ b/kernel/rcu/tree.c > > @@ -2688,28 +2688,38 @@ EXPORT_SYMBOL_GPL(call_rcu); > > > > /* Maximum number of jiffies to wait before draining a batch. */ > > #define KFREE_DRAIN_JIFFIES (HZ / 50) > > +#define KFREE_N_BATCHES 2 > > + > > +struct kfree_rcu_work { > > + /* The rcu_work node for queuing work with queue_rcu_work(). The work > > + * is done after a grace period. > > + */ > > + struct rcu_work rcu_work; > > + > > + /* The list of objects that have now left ->head and are queued for > > + * freeing after a grace period. > > + */ > > + struct rcu_head *head_free; > > + > > + struct kfree_rcu_cpu *krcp; > > +}; > > +static DEFINE_PER_CPU(__typeof__(struct kfree_rcu_work)[KFREE_N_BATCHES], krw); > > > > Why not > > static DEFINE_PER_CPU(struct kfree_rcu_work[KFREE_N_BATCHES], krw); > > here? Am I missing something? Yes, that's better. > Further, given "struct kfree_rcu_cpu" is only for defining percpu > variables, how about orginazing the data structure like: > > struct kfree_rcu_cpu { > ... > struct kfree_rcu_work krws[KFREE_N_BATCHES]; > ... > } > > This could save one pointer in kfree_rcu_cpu, and I think it provides > better cache locality for accessing _cpu and _work on the same cpu. > > Thoughts? Yes, that's better. Thanks, Boqun! 
Following is the diff which I will fold into this patch:

---8<-----------------------

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b3259306b7a5..fac5ae96d8b1 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2717,7 +2717,6 @@ struct kfree_rcu_work {
 
 	struct kfree_rcu_cpu *krcp;
 };
-static DEFINE_PER_CPU(__typeof__(struct kfree_rcu_work)[KFREE_N_BATCHES], krw);
 
 /*
  * Maximum number of kfree(s) to batch, if this limit is hit then the batch of
@@ -2731,7 +2730,7 @@ struct kfree_rcu_cpu {
 	struct rcu_head *head;
 
 	/* Pointer to the per-cpu array of kfree_rcu_work structures */
-	struct kfree_rcu_work *krwp;
+	struct kfree_rcu_work krw_arr[KFREE_N_BATCHES];
 
 	/* Protect concurrent access to this structure and kfree_rcu_work. */
 	spinlock_t lock;
@@ -2800,8 +2799,8 @@ static inline bool queue_kfree_rcu_work(struct kfree_rcu_cpu *krcp)
 	lockdep_assert_held(&krcp->lock);
 	while (i < KFREE_N_BATCHES) {
-		if (!krcp->krwp[i].head_free) {
-			krwp = &(krcp->krwp[i]);
+		if (!krcp->krw_arr[i].head_free) {
+			krwp = &(krcp->krw_arr[i]);
 			break;
 		}
 		i++;
@@ -3780,13 +3779,11 @@ static void __init kfree_rcu_batch_init(void)
 
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
-		struct kfree_rcu_work *krwp = &(per_cpu(krw, cpu)[0]);
 		int i = KFREE_N_BATCHES;
 
 		spin_lock_init(&krcp->lock);
-		krcp->krwp = krwp;
 		while (i--)
-			krwp[i].krcp = krcp;
+			krcp->krw_arr[i].krcp = krcp;
 		INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor);
 	}
 }
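And in case it helps review: the selection loop in
queue_kfree_rcu_work() above is equivalent to the helper below.
pick_free_krwp() is a hypothetical name used purely for illustration,
not something the patch adds:

/* Illustration only (not in the patch): return an idle batch, i.e.
 * one whose previous objects have already been drained, or NULL if
 * all KFREE_N_BATCHES batches still have a grace period in flight.
 */
static struct kfree_rcu_work *pick_free_krwp(struct kfree_rcu_cpu *krcp)
{
	int i;

	lockdep_assert_held(&krcp->lock);
	for (i = 0; i < KFREE_N_BATCHES; i++) {
		if (!krcp->krw_arr[i].head_free)
			return &krcp->krw_arr[i];
	}
	return NULL;
}

With KFREE_N_BATCHES == 2, one batch can sit out its grace period
while the other keeps accepting newly detached objects, which is
where the 100-150MB reduction comes from.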