From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 19 Jul 2024 19:07:12 +0100
From: Jonathan Cameron
To: Mike Rapoport
CC: Alexander Gordeev, Andreas Larsson, Andrew Morton, Arnd Bergmann,
 Borislav Petkov, Catalin Marinas, Christophe Leroy, Dan Williams,
 Dave Hansen, David Hildenbrand, "David S. Miller", Greg Kroah-Hartman,
 Heiko Carstens, Huacai Chen, Ingo Molnar, Jiaxun Yang,
 John Paul Adrian Glaubitz, Michael Ellerman, Palmer Dabbelt,
 "Rafael J. Wysocki", Rob Herring, Thomas Bogendoerfer, Thomas Gleixner,
 Vasily Gorbik, Will Deacon
Subject: Re: [PATCH 15/17] mm: make numa_memblks more self-contained
Message-ID: <20240719190712.00001307@Huawei.com>
In-Reply-To: <20240716111346.3676969-16-rppt@kernel.org>
References: <20240716111346.3676969-1-rppt@kernel.org>
 <20240716111346.3676969-16-rppt@kernel.org>
Organization: Huawei Technologies Research and Development (UK) Ltd.
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
On Tue, 16 Jul 2024 14:13:44 +0300
Mike Rapoport wrote:

> From: "Mike Rapoport (Microsoft)"
>
> Introduce numa_memblks_init() and move some code around to avoid several
> global variables in numa_memblks.

Hi Mike,

Adding the effectively always-on memblock_force_top_down parameter
deserves a comment on why.  I assume it is because you are going to do
something with it later?

There also seems to be more going on in here, such as the change to
get_pfn_range_for_nid().  Perhaps break this up so each change can have
an explanation.

>
> Signed-off-by: Mike Rapoport (Microsoft)
> ---
>  arch/x86/mm/numa.c           | 53 ++++---------
>  include/linux/numa_memblks.h |  9 +----
>  mm/numa_memblks.c            | 77 +++++++++++++++++++++++++++---------
>  3 files changed, 68 insertions(+), 71 deletions(-)
>
> diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> index 3848e68d771a..16bc703c9272 100644
> --- a/arch/x86/mm/numa.c
> +++ b/arch/x86/mm/numa.c
> @@ -115,30 +115,19 @@ void __init setup_node_to_cpumask_map(void)
>  	pr_debug("Node to cpumask map for %u nodes\n", nr_node_ids);
>  }
>
> -static int __init numa_register_memblks(struct numa_meminfo *mi)
> +static int __init numa_register_nodes(void)
>  {
> -	int i, nid, err;
> -
> -	err = numa_register_meminfo(mi);
> -	if (err)
> -		return err;
> +	int nid;
>
>  	if (!memblock_validate_numa_coverage(SZ_1M))
>  		return -EINVAL;
>
>  	/* Finally register nodes. */
>  	for_each_node_mask(nid, node_possible_map) {
> -		u64 start = PFN_PHYS(max_pfn);
> -		u64 end = 0;
> -
> -		for (i = 0; i < mi->nr_blks; i++) {
> -			if (nid != mi->blk[i].nid)
> -				continue;
> -			start = min(mi->blk[i].start, start);
> -			end = max(mi->blk[i].end, end);
> -		}
> +		unsigned long start_pfn, end_pfn;
>
> -		if (start >= end)
> +		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);

It's not immediately obvious to me that this code is equivalent, so I'd
prefer it in a separate patch with some description of why it is a valid
change.

> +		if (start_pfn >= end_pfn)
>  			continue;
>
>  		alloc_node_data(nid);
> @@ -178,39 +167,11 @@ static int __init numa_init(int (*init_func)(void))
>  	for (i = 0; i < MAX_LOCAL_APIC; i++)
>  		set_apicid_to_node(i, NUMA_NO_NODE);
>
> -	nodes_clear(numa_nodes_parsed);
> -	nodes_clear(node_possible_map);
> -	nodes_clear(node_online_map);
> -	memset(&numa_meminfo, 0, sizeof(numa_meminfo));
> -	WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
> -				  NUMA_NO_NODE));
> -	WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.reserved,
> -				  NUMA_NO_NODE));
> -	/* In case that parsing SRAT failed. */
> -	WARN_ON(memblock_clear_hotplug(0, ULLONG_MAX));
> -	numa_reset_distance();
> -
> -	ret = init_func();
> -	if (ret < 0)
> -		return ret;
> -
> -	/*
> -	 * We reset memblock back to the top-down direction
> -	 * here because if we configured ACPI_NUMA, we have
> -	 * parsed SRAT in init_func(). It is ok to have the
> -	 * reset here even if we did't configure ACPI_NUMA
> -	 * or acpi numa init fails and fallbacks to dummy
> -	 * numa init.
> -	 */
> -	memblock_set_bottom_up(false);
> -
> -	ret = numa_cleanup_meminfo(&numa_meminfo);
> +	ret = numa_memblks_init(init_func, /* memblock_force_top_down */ true);

The comment in the parameter list seems unnecessary.  Maybe add a
comment above the call instead if you need to call that out?
>  	if (ret < 0)
>  		return ret;
>
> -	numa_emulation(&numa_meminfo, numa_distance_cnt);
> -
> -	ret = numa_register_memblks(&numa_meminfo);
> +	ret = numa_register_nodes();
>  	if (ret < 0)
>  		return ret;
>
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index e0039549aaac..640f3a3ce0ee 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -7,13 +7,27 @@
>  #include
>  #include
>
> +/*
> + * Set nodes, which have memory in @mi, in *@nodemask.
> + */
> +static void __init numa_nodemask_from_meminfo(nodemask_t *nodemask,
> +					      const struct numa_meminfo *mi)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(mi->blk); i++)
> +		if (mi->blk[i].start != mi->blk[i].end &&
> +		    mi->blk[i].nid != NUMA_NO_NODE)
> +			node_set(mi->blk[i].nid, *nodemask);
> +}

The code move doesn't have an obvious purpose.  Maybe call that out in
the patch description if it is needed for a future patch.  Or do it in
two goes, so the first patch just adds the static and the second moves
the code.

>
> /**
>  * numa_reset_distance - Reset NUMA distance table
> @@ -287,20 +301,6 @@ int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
>  	return 0;
>  }
>
> -/*
> - * Set nodes, which have memory in @mi, in *@nodemask.
> - */
> -void __init numa_nodemask_from_meminfo(nodemask_t *nodemask,
> -				       const struct numa_meminfo *mi)
> -{
> -	int i;
> -
> -	for (i = 0; i < ARRAY_SIZE(mi->blk); i++)
> -		if (mi->blk[i].start != mi->blk[i].end &&
> -		    mi->blk[i].nid != NUMA_NO_NODE)
> -			node_set(mi->blk[i].nid, *nodemask);
> -}
> -
> /*
>  * Mark all currently memblock-reserved physical memory (which covers the
>  * kernel's own memory ranges) as hot-unswappable.
> @@ -368,7 +368,7 @@ static void __init numa_clear_kernel_node_hotplug(void)
>  	}
>  }
>
> -int __init numa_register_meminfo(struct numa_meminfo *mi)
> +static int __init numa_register_meminfo(struct numa_meminfo *mi)
>  {
>  	int i;
>
> @@ -412,6 +412,47 @@ int __init numa_register_meminfo(struct numa_meminfo *mi)
>  	return 0;
>  }
>
> +int __init numa_memblks_init(int (*init_func)(void),
> +			     bool memblock_force_top_down)
> +{
> +	int ret;
> +
> +	nodes_clear(numa_nodes_parsed);
> +	nodes_clear(node_possible_map);
> +	nodes_clear(node_online_map);
> +	memset(&numa_meminfo, 0, sizeof(numa_meminfo));
> +	WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
> +				  NUMA_NO_NODE));
> +	WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.reserved,
> +				  NUMA_NO_NODE));
> +	/* In case that parsing SRAT failed. */
> +	WARN_ON(memblock_clear_hotplug(0, ULLONG_MAX));
> +	numa_reset_distance();
> +
> +	ret = init_func();
> +	if (ret < 0)
> +		return ret;
> +
> +	/*
> +	 * We reset memblock back to the top-down direction
> +	 * here because if we configured ACPI_NUMA, we have
> +	 * parsed SRAT in init_func(). It is ok to have the
> +	 * reset here even if we did't configure ACPI_NUMA
> +	 * or acpi numa init fails and fallbacks to dummy
> +	 * numa init.
> +	 */
> +	if (memblock_force_top_down)
> +		memblock_set_bottom_up(false);
> +
> +	ret = numa_cleanup_meminfo(&numa_meminfo);
> +	if (ret < 0)
> +		return ret;
> +
> +	numa_emulation(&numa_meminfo, numa_distance_cnt);
> +
> +	return numa_register_meminfo(&numa_meminfo);
> +}
> +
> static int __init cmp_memblk(const void *a, const void *b)
> {
> 	const struct numa_memblk *ma = *(const struct numa_memblk **)a;