From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 29 Sep 2023 17:37:13 +0800
Subject: Re: [PATCH v4 1/2] mm: pass page count and reserved to __init_single_page
To: Mike Rapoport
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev, willy@infradead.org, david@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20230928083302.386202-1-yajun.deng@linux.dev> <20230928083302.386202-2-yajun.deng@linux.dev> <20230929081938.GT3303@kernel.org>
From: Yajun Deng <yajun.deng@linux.dev>
In-Reply-To: <20230929081938.GT3303@kernel.org>

On 2023/9/29 16:19, Mike Rapoport wrote:
> On Thu, Sep 28, 2023 at 04:33:01PM +0800, Yajun Deng wrote:
>> Subject: mm: pass page count and reserved to __init_single_page
> We pass flags that tell __init_single_page() how to initialize page
> count and PG_reserved, so I think a better subject would be:
>
> mm: allow optional initialization of page count and PG_reserved flag

Okay.

>> When we init a single page, we need to mark that page reserved when it
>> is reserved. And some pages need to reset page count, such as compound
>> pages.
>>
>> Introduce enum init_page_flags, the caller will init page count and mark
>> page reserved by passing INIT_PAGE_COUNT and INIT_PAGE_RESERVED.
> This does not really describe why the change is needed. How about
>
> __init_single_page() unconditionally resets page count which is unnecessary
> for reserved pages.
>
> To allow skipping page count initialization and marking a page reserved in
> one go add flags parameter to __init_single_page().
>
> No functional changes.

Okay.

>> Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
>> ---
>> v4: move the changes of __init_zone_device_page().
>> v3: Introduce enum init_page_flags.
>> v2: Introduce INIT_PAGE_COUNT and INIT_PAGE_RESERVED.
>> v1: https://lore.kernel.org/all/20230922070923.355656-1-yajun.deng@linux.dev/
>> ---
>>  mm/hugetlb.c  |  2 +-
>>  mm/internal.h |  8 +++++++-
>>  mm/mm_init.c  | 24 +++++++++++++-----------
>>  3 files changed, 21 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index a82dc37669b0..bb9c334a8392 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -3196,7 +3196,7 @@ static void __init hugetlb_folio_init_tail_vmemmap(struct folio *folio,
>>  	for (pfn = head_pfn + start_page_number; pfn < end_pfn; pfn++) {
>>  		struct page *page = pfn_to_page(pfn);
>>
>> -		__init_single_page(page, pfn, zone, nid);
>> +		__init_single_page(page, pfn, zone, nid, INIT_PAGE_COUNT);
>>  		prep_compound_tail((struct page *)folio, pfn - head_pfn);
>>  		ret = page_ref_freeze(page, 1);
>>  		VM_BUG_ON(!ret);
>> diff --git a/mm/internal.h b/mm/internal.h
>> index d7916f1e9e98..449891ad7fdb 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1209,8 +1209,14 @@ struct vma_prepare {
>>  	struct vm_area_struct *remove2;
>>  };
>>
>> +enum init_page_flags {
> enum page_init_flags please

Okay.
>> +	INIT_PAGE_COUNT = (1 << 0),
>> +	INIT_PAGE_RESERVED = (1 << 1),
>> +};
>> +
>>  void __meminit __init_single_page(struct page *page, unsigned long pfn,
>> -				  unsigned long zone, int nid);
>> +				  unsigned long zone, int nid,
>> +				  enum init_page_flags flags);
>>
>>  /* shrinker related functions */
>>  unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 06a72c223bce..9716c8a7ade9 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -557,11 +557,11 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>>  }
>>
>>  void __meminit __init_single_page(struct page *page, unsigned long pfn,
>> -				  unsigned long zone, int nid)
>> +				  unsigned long zone, int nid,
>> +				  enum init_page_flags flags)
>>  {
>>  	mm_zero_struct_page(page);
>>  	set_page_links(page, zone, nid, pfn);
>> -	init_page_count(page);
>>  	page_mapcount_reset(page);
>>  	page_cpupid_reset_last(page);
>>  	page_kasan_tag_reset(page);
>> @@ -572,6 +572,10 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
>>  	if (!is_highmem_idx(zone))
>>  		set_page_address(page, __va(pfn << PAGE_SHIFT));
>>  #endif
>> +	if (flags & INIT_PAGE_COUNT)
>> +		init_page_count(page);
>> +	if (flags & INIT_PAGE_RESERVED)
>> +		__SetPageReserved(page);
>>  }
>>
>>  #ifdef CONFIG_NUMA
>> @@ -714,7 +718,7 @@ static void __meminit init_reserved_page(unsigned long pfn, int nid)
>>  		if (zone_spans_pfn(zone, pfn))
>>  			break;
>>  	}
>> -	__init_single_page(pfn_to_page(pfn), pfn, zid, nid);
>> +	__init_single_page(pfn_to_page(pfn), pfn, zid, nid, INIT_PAGE_COUNT);
> There is a __SetPageReserved() call a few lines below, it can be folded here.

No, there is an #ifdef in front of it. If so, I would need to add __SetPageReserved()
to the other init_reserved_page(), and there is a return before __init_single_page().
I will change INIT_PAGE_COUNT to 0 in the next patch.
>>  }
>>  #else
>>  static inline void pgdat_set_deferred_range(pg_data_t *pgdat) {}
>> @@ -821,8 +825,8 @@ static void __init init_unavailable_range(unsigned long spfn,
>>  			pfn = pageblock_end_pfn(pfn) - 1;
>>  			continue;
>>  		}
>> -		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
>> -		__SetPageReserved(pfn_to_page(pfn));
>> +		__init_single_page(pfn_to_page(pfn), pfn, zone, node,
>> +				   INIT_PAGE_COUNT | INIT_PAGE_RESERVED);
>>  		pgcnt++;
>>  	}
>>
>> @@ -884,7 +888,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>>  		}
>>
>>  		page = pfn_to_page(pfn);
>> -		__init_single_page(page, pfn, zone, nid);
>> +		__init_single_page(page, pfn, zone, nid, INIT_PAGE_COUNT);
>>  		if (context == MEMINIT_HOTPLUG)
>>  			__SetPageReserved(page);
>>
>> @@ -967,9 +971,6 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>>  					  unsigned long zone_idx, int nid,
>>  					  struct dev_pagemap *pgmap)
>>  {
>> -
>> -	__init_single_page(page, pfn, zone_idx, nid);
>> -
>>  	/*
>>  	 * Mark page reserved as it will need to wait for onlining
>>  	 * phase for it to be fully associated with a zone.
>> @@ -977,7 +978,8 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>>  	 * We can use the non-atomic __set_bit operation for setting
>>  	 * the flag as we are still initializing the pages.
>>  	 */
>> -	__SetPageReserved(page);
>> +	__init_single_page(page, pfn, zone_idx, nid,
>> +			   INIT_PAGE_COUNT | INIT_PAGE_RESERVED);
>>
>>  	/*
>>  	 * ZONE_DEVICE pages union ->lru with a ->pgmap back pointer
>> @@ -2058,7 +2060,7 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
>>  		} else {
>>  			page++;
>>  		}
>> -		__init_single_page(page, pfn, zid, nid);
>> +		__init_single_page(page, pfn, zid, nid, INIT_PAGE_COUNT);
>>  		nr_pages++;
>>  	}
>>  	return (nr_pages);
>> --
>> 2.25.1
>>