From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <55F6B1F3.1010702@linux.vnet.ibm.com>
Date: Mon, 14 Sep 2015 17:09:31 +0530
From: Raghavendra K T
Organization: IBM
MIME-Version: 1.0
To: Vladimir Davydov
CC: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au,
 anton@samba.org, akpm@linux-foundation.org, nacc@linux.vnet.ibm.com,
 gkurz@linux.vnet.ibm.com, zhong@linux.vnet.ibm.com,
 grant.likely@linaro.org, nikunj@linux.vnet.ibm.com,
 linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: Re: [PATCH 1/2] mm: Replace nr_node_ids for loop with for_each_node in list lru
References: <1441737107-23103-1-git-send-email-raghavendra.kt@linux.vnet.ibm.com>
 <1441737107-23103-2-git-send-email-raghavendra.kt@linux.vnet.ibm.com>
 <20150914090010.GB30743@esperanza>
In-Reply-To: <20150914090010.GB30743@esperanza>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> Hi,
>
> On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
>> The functions used in the patch are in the slow path, which is called
>> whenever alloc_super is called during mounts.
>>
>> Though this should not make a difference for architectures with
>> sequential NUMA node ids, on powerpc, which can potentially have
>> sparse node ids (e.g., a 4-node system with NUMA ids 0, 1, 16, 17
>> is common), this patch saves some unnecessary allocations for
>> non-existent NUMA nodes.
>>
>> Even without that saving, the patch arguably makes the code more
>> readable.
>
> Do I understand correctly that node 0 must always be in
> node_possible_map? I ask, because we currently test
> lru->node[0].memcg_lrus to determine if the list is memcg aware.
>

Yes, node 0 is always there, so it should not be a problem.

>>
>> Signed-off-by: Raghavendra K T
>> ---
>>  mm/list_lru.c | 23 +++++++++++++++--------
>>  1 file changed, 15 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>> index 909eca2..5a97f83 100644
>> --- a/mm/list_lru.c
>> +++ b/mm/list_lru.c
>> @@ -377,7 +377,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>>  {
>>  	int i;
>>
>> -	for (i = 0; i < nr_node_ids; i++) {
>> +	for_each_node(i) {
>>  		if (!memcg_aware)
>>  			lru->node[i].memcg_lrus = NULL;
>
> So, we don't explicitly initialize memcg_lrus for nodes that are not in
> node_possible_map. That's OK, because we allocate lru->node using
> kzalloc. However, this partial nullifying in the !memcg_aware case looks
> confusing IMO. Let's drop it; I mean something like this:

Yes, you are right, and we do not need the memcg_aware check inside the
for loop either. I will change it as per your suggestion and send v2.
Thanks for the review.

>
> static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
> {
> 	int i;
>
> 	if (!memcg_aware)
> 		return 0;
>
> 	for_each_node(i) {
> 		if (memcg_init_list_lru_node(&lru->node[i]))
> 			goto fail;
> 	}
>