From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@linux-foundation.org, rppt@kernel.org
Cc: mike.kravetz@oracle.com, muchun.song@linux.dev, willy@infradead.org,
	david@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Yajun Deng <yajun.deng@linux.dev>
Subject: [PATCH v2 2/2] mm: Init page count in reserve_bootmem_region when MEMINIT_EARLY
Date: Mon, 25 Sep 2023 15:21:50 +0800
Message-Id: <20230925072150.386880-3-yajun.deng@linux.dev>
In-Reply-To: <20230925072150.386880-1-yajun.deng@linux.dev>
References: <20230925072150.386880-1-yajun.deng@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

memmap_init_range() sets the page count of all pages, but the page count
of free pages is then reset in __free_pages_core(). These are opposite
operations, and doing both is unnecessary and time-consuming in the
MEMINIT_EARLY context.

Instead, initialize the page count in reserve_bootmem_region() when in the
MEMINIT_EARLY context, and check the page count before resetting it in
__free_pages_core(). At the same time, the INIT_LIST_HEAD() in
reserve_bootmem_region() isn't needed, as it is already done in
__init_single_page().
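As an illustration only (not part of the patch), the intended refcount flow
after this change can be sketched as follows; the actual code is in the
diff below:

	/*
	 * MEMINIT_EARLY:
	 *   memmap_init_range()      -> __init_single_page(..., 0): refcount 0
	 *   reserve_bootmem_region() -> init_page_count(): refcount 1, reserved regions only
	 *   __free_pages_core()      -> page_count() == 0 for free regions, skip the reset loop
	 *
	 * MEMINIT_HOTPLUG:
	 *   memmap_init_range()      -> init_page_count() + __SetPageReserved()
	 *   __free_pages_core()      -> page_count() != 0, reset refcounts as before
	 */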
The following data was tested on an x86 machine with 190GB of RAM.

before: free_low_memory_core_early()  341ms
after:  free_low_memory_core_early()  285ms

Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
---
 mm/mm_init.c    | 18 +++++++++++++-----
 mm/page_alloc.c | 20 ++++++++++++--------
 2 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 61df37133331..64c00ebaf4ef 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -718,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
 		if (zone_spans_pfn(zone, pfn))
 			break;
 	}
-	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, INIT_PAGE_COUNT);
+	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, 0);
 }
 #else
 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
@@ -756,8 +756,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 
 			init_reserved_page(start_pfn, nid);
 
-			/* Avoid false-positive PageTail() */
-			INIT_LIST_HEAD(&page->lru);
+			/* Set the page count for the reserved region */
+			init_page_count(page);
 
 			/*
 			 * no need for atomic set_bit because the struct
@@ -888,9 +888,17 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		}
 
 		page = pfn_to_page(pfn);
-		__init_single_page(page, pfn, zone, nid, INIT_PAGE_COUNT);
-		if (context == MEMINIT_HOTPLUG)
+
+		/* If the context is MEMINIT_EARLY, the page count and the
+		 * reserved flag are set in reserve_bootmem_region(); free
+		 * regions keep a page count of zero, which is checked in
+		 * __free_pages_core().
+		 */
+		__init_single_page(page, pfn, zone, nid, 0);
+		if (context == MEMINIT_HOTPLUG) {
+			init_page_count(page);
 			__SetPageReserved(page);
+		}
 
 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 06be8821d833..b868caabe8dc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1285,18 +1285,22 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int loop;
 
 	/*
-	 * When initializing the memmap, __init_single_page() sets the refcount
-	 * of all pages to 1 ("allocated"/"not free"). We have to set the
-	 * refcount of all involved pages to 0.
+	 * When initializing the memmap in the hotplug context,
+	 * memmap_init_range() sets the refcount of all pages to 1
+	 * ("reserved" and "free"), and we have to reset it to 0 here. In the
+	 * early context, only reserve_bootmem_region() sets the refcount
+	 * ("reserved"), so free regions are skipped via the check below.
 	 */
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
+	if (page_count(page)) {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			__ClearPageReserved(p);
+			set_page_count(p, 0);
+		}
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
 	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
-- 
2.25.1