Date: Mon, 22 Jul 2024 11:03:57 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Jonathan Cameron
Cc: linux-kernel@vger.kernel.org, Alexander Gordeev, Andreas Larsson,
	Andrew Morton, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christophe Leroy, Dan Williams, Dave Hansen, David Hildenbrand,
	"David S. Miller", Greg Kroah-Hartman, Heiko Carstens, Huacai Chen,
	Ingo Molnar, Jiaxun Yang, John Paul Adrian Glaubitz, Michael Ellerman,
	Palmer Dabbelt, "Rafael J. Wysocki", Rob Herring, Thomas Bogendoerfer,
	Thomas Gleixner, Vasily Gorbik, Will Deacon,
	linux-arm-kernel@lists.infradead.org, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-acpi@vger.kernel.org, linux-cxl@vger.kernel.org,
	nvdimm@lists.linux.dev, devicetree@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Subject: Re: [PATCH 12/17] mm: introduce numa_memblks
References: <20240716111346.3676969-1-rppt@kernel.org>
	<20240716111346.3676969-13-rppt@kernel.org>
	<20240719191647.000072f6@Huawei.com>
In-Reply-To: <20240719191647.000072f6@Huawei.com>

On Fri, Jul 19, 2024 at 07:16:47PM +0100, Jonathan Cameron wrote:
> On Tue, 16 Jul 2024 14:13:41 +0300
> Mike Rapoport <rppt@kernel.org> wrote:
>
> > From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> >
> > Move code dealing with numa_memblks from arch/x86 to mm/ and add Kconfig
> > options to let x86 select it in its Kconfig.
> >
> > This code will be later reused by arch_numa.
> >
> > No functional changes.
> >
> > Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>
> Hi Mike,
>
> My only real concern in here is there are a few places where
> the lifted code makes changes to memblocks that are x86 only today.
> I need to do some more digging to work out if those are safe
> in all cases.
>
> Jonathan
>
> > +/**
> > + * numa_cleanup_meminfo - Cleanup a numa_meminfo
> > + * @mi: numa_meminfo to clean up
> > + *
> > + * Sanitize @mi by merging and removing unnecessary memblks. Also check for
> > + * conflicts and clear unused memblks.
> > + *
> > + * RETURNS:
> > + * 0 on success, -errno on failure.
> > + */
> > +int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
> > +{
> > +	const u64 low = 0;
>
> Given it's always zero, why not just use that value inline?

Actually it seems to me that it should be memblock_start_of_DRAM(). The
blocks outside system memory are moved to numa_reserved_meminfo, so AFAIU
on arm64/riscv such blocks can be below the RAM.

> > +	const u64 high = PFN_PHYS(max_pfn);
> > +	int i, j, k;
> > +
> > +	/* first, trim all entries */
> > +	for (i = 0; i < mi->nr_blks; i++) {
> > +		struct numa_memblk *bi = &mi->blk[i];
> > +
> > +		/* move / save reserved memory ranges */
> > +		if (!memblock_overlaps_region(&memblock.memory,
> > +					      bi->start, bi->end - bi->start)) {
> > +			numa_move_tail_memblk(&numa_reserved_meminfo, i--, mi);
> > +			continue;
> > +		}
> > +
> > +		/* make sure all non-reserved blocks are inside the limits */
> > +		bi->start = max(bi->start, low);
> > +
> > +		/* preserve info for non-RAM areas above 'max_pfn': */
> > +		if (bi->end > high) {
> > +			numa_add_memblk_to(bi->nid, high, bi->end,
> > +					   &numa_reserved_meminfo);
> > +			bi->end = high;
> > +		}
> > +
> > +		/* and there's no empty block */
> > +		if (bi->start >= bi->end)
> > +			numa_remove_memblk_from(i--, mi);
> > +	}
> > +
> > +	/* merge neighboring / overlapping entries */
> > +	for (i = 0; i < mi->nr_blks; i++) {
> > +		struct numa_memblk *bi = &mi->blk[i];
> > +
> > +		for (j = i + 1; j < mi->nr_blks; j++) {
> > +			struct numa_memblk *bj = &mi->blk[j];
> > +			u64 start, end;
> > +
> > +			/*
> > +			 * See whether there are overlapping blocks. Whine
> > +			 * about but allow overlaps of the same nid. They
> > +			 * will be merged below.
> > +			 */
> > +			if (bi->end > bj->start && bi->start < bj->end) {
> > +				if (bi->nid != bj->nid) {
> > +					pr_err("node %d [mem %#010Lx-%#010Lx] overlaps with node %d [mem %#010Lx-%#010Lx]\n",
> > +					       bi->nid, bi->start, bi->end - 1,
> > +					       bj->nid, bj->start, bj->end - 1);
> > +					return -EINVAL;
> > +				}
> > +				pr_warn("Warning: node %d [mem %#010Lx-%#010Lx] overlaps with itself [mem %#010Lx-%#010Lx]\n",
> > +					bi->nid, bi->start, bi->end - 1,
> > +					bj->start, bj->end - 1);
> > +			}
> > +
> > +			/*
> > +			 * Join together blocks on the same node, holes
> > +			 * between which don't overlap with memory on other
> > +			 * nodes.
> > +			 */
> > +			if (bi->nid != bj->nid)
> > +				continue;
> > +			start = min(bi->start, bj->start);
> > +			end = max(bi->end, bj->end);
> > +			for (k = 0; k < mi->nr_blks; k++) {
> > +				struct numa_memblk *bk = &mi->blk[k];
> > +
> > +				if (bi->nid == bk->nid)
> > +					continue;
> > +				if (start < bk->end && end > bk->start)
> > +					break;
> > +			}
> > +			if (k < mi->nr_blks)
> > +				continue;
> > +			pr_info("NUMA: Node %d [mem %#010Lx-%#010Lx] + [mem %#010Lx-%#010Lx] -> [mem %#010Lx-%#010Lx]\n",
> > +				bi->nid, bi->start, bi->end - 1, bj->start,
> > +				bj->end - 1, start, end - 1);
> > +			bi->start = start;
> > +			bi->end = end;
> > +			numa_remove_memblk_from(j--, mi);
> > +		}
> > +	}
> > +
> > +	/* clear unused ones */
> > +	for (i = mi->nr_blks; i < ARRAY_SIZE(mi->blk); i++) {
> > +		mi->blk[i].start = mi->blk[i].end = 0;
> > +		mi->blk[i].nid = NUMA_NO_NODE;
> > +	}
> > +
> > +	return 0;
> > +}

...

> > +/*
> > + * Mark all currently memblock-reserved physical memory (which covers the
> > + * kernel's own memory ranges) as hot-unswappable.
> > + */
> > +static void __init numa_clear_kernel_node_hotplug(void)
>
> This will be a change for non x86 architectures. 'should' be fine
> but I'm not 100% sure.

This function sets the node IDs in memblock.reserved, which does not change
anything except the dump in debugfs, and then uses the node info in
memblock.reserved to clear MEMBLOCK_HOTPLUG from the regions in
memblock.memory that contain the reserved memory, because they cannot be
hot(un)plugged anyway.

> > +{
> > +	nodemask_t reserved_nodemask = NODE_MASK_NONE;
> > +	struct memblock_region *mb_region;
> > +	int i;
> > +
> > +	/*
> > +	 * We have to do some preprocessing of memblock regions, to
> > +	 * make them suitable for reservation.
> > +	 *
> > +	 * At this time, all memory regions reserved by memblock are
> > +	 * used by the kernel, but those regions are not split up
> > +	 * along node boundaries yet, and don't necessarily have their
> > +	 * node ID set yet either.
> > +	 *
> > +	 * So iterate over all memory known to the x86 architecture,
>
> Comment needs an update at least given not x86 specific any more.

Sure, will fix.

> > +	 * and use those ranges to set the nid in memblock.reserved.
> > +	 * This will split up the memblock regions along node
> > +	 * boundaries and will set the node IDs as well.
> > +	 */
> > +	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > +		struct numa_memblk *mb = numa_meminfo.blk + i;
> > +		int ret;
> > +
> > +		ret = memblock_set_node(mb->start, mb->end - mb->start,
> > +					&memblock.reserved, mb->nid);
> > +		WARN_ON_ONCE(ret);
> > +	}
> > +
> > +	/*
> > +	 * Now go over all reserved memblock regions, to construct a
> > +	 * node mask of all kernel reserved memory areas.
> > +	 *
> > +	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
> > +	 *   numa_meminfo might not include all memblock.reserved
> > +	 *   memory ranges, because quirks such as trim_snb_memory()
> > +	 *   reserve specific pages for Sandy Bridge graphics. ]
> > +	 */
> > +	for_each_reserved_mem_region(mb_region) {
> > +		int nid = memblock_get_region_node(mb_region);
> > +
> > +		if (nid != MAX_NUMNODES)
> > +			node_set(nid, reserved_nodemask);
> > +	}
> > +
> > +	/*
> > +	 * Finally, clear the MEMBLOCK_HOTPLUG flag for all memory
> > +	 * belonging to the reserved node mask.
> > +	 *
> > +	 * Note that this will include memory regions that reside
> > +	 * on nodes that contain kernel memory - entire nodes
> > +	 * become hot-unpluggable:
> > +	 */
> > +	for (i = 0; i < numa_meminfo.nr_blks; i++) {
> > +		struct numa_memblk *mb = numa_meminfo.blk + i;
> > +
> > +		if (!node_isset(mb->nid, reserved_nodemask))
> > +			continue;
> > +
> > +		memblock_clear_hotplug(mb->start, mb->end - mb->start);
> > +	}
> > +}

-- 
Sincerely yours,
Mike.