Date: Mon, 11 May 2026 15:46:14 +0200
Subject: Re: [PATCH v2 08/22] mm: introduce for_each_free_list()
From: "Vlastimil Babka (SUSE)"
To: Brendan Jackman, Borislav Petkov, Dave Hansen, Peter Zijlstra,
 Andrew Morton, David Hildenbrand, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita",
 patrick.roy@linux.dev, "Itazuri, Takahiro", Andy Lutomirski,
 David Kaplan, Thomas Gleixner, Yosry Ahmed
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
 <20260320-page_alloc-unmapped-v2-8-28bf1bd54f41@google.com>
In-Reply-To: <20260320-page_alloc-unmapped-v2-8-28bf1bd54f41@google.com>

On 3/20/26 19:23, Brendan Jackman wrote:
> Later patches will rearrange the free areas, but there are a couple of
> places that iterate over them with the assumption that they have the
> current structure.
>
> It seems ideally, code outside of mm should not be directly aware of
> struct free_area in the first place, but that awareness seems relatively
> harmless so just make the minimal change.

I think we should lift the code from kernel/power/snapshot.c to under
mm/ eventually, ISTR discussing it somewhere recently. But it doesn't
have to be in this series.

> Now instead of letting users manually iterate over the free lists, just
> provide a macro to do that. Then adopt that macro in a couple of places.
>
> Signed-off-by: Brendan Jackman

Reviewed-by: Vlastimil Babka (SUSE)

> ---
>  include/linux/mmzone.h  |  7 +++++--
>  kernel/power/snapshot.c |  8 ++++----
>  mm/mm_init.c            | 11 +++++++----
>  3 files changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7bd0134c241ce..c49e3cdf4f6bb 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -177,9 +177,12 @@ static inline bool migratetype_is_mergeable(int mt)
>  	return mt < MIGRATE_PCPTYPES;
>  }
>  
> -#define for_each_migratetype_order(order, type) \
> +#define for_each_free_list(list, zone, order) \
>  	for (order = 0; order < NR_PAGE_ORDERS; order++) \
> -		for (type = 0; type < MIGRATE_TYPES; type++)
> +		for (unsigned int type = 0; \
> +		     list = &zone->free_area[order].free_list[type], \
> +		     type < MIGRATE_TYPES; \
> +		     type++) \
>  
>  extern int page_group_by_mobility_disabled;
>  
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 7dcccf378cc2f..abd33ca13eec4 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1245,8 +1245,9 @@ unsigned int snapshot_additional_pages(struct zone *zone)
>  static void mark_free_pages(struct zone *zone)
>  {
>  	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
> +	struct list_head *free_list;
>  	unsigned long flags;
> -	unsigned int order, t;
> +	unsigned int order;
>  	struct page *page;
>  
>  	if (zone_is_empty(zone))
> @@ -1270,9 +1271,8 @@ static void mark_free_pages(struct zone *zone)
>  			swsusp_unset_page_free(page);
>  	}
>  
> -	for_each_migratetype_order(order, t) {
> -		list_for_each_entry(page,
> -				&zone->free_area[order].free_list[t], buddy_list) {
> +	for_each_free_list(free_list, zone, order) {
> +		list_for_each_entry(page, free_list, buddy_list) {
>  			unsigned long i;
>  
>  			pfn = page_to_pfn(page);
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 969048f9b320c..f6f9455bc42b6 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1445,11 +1445,14 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx,
>  
>  static void __meminit zone_init_free_lists(struct zone *zone)
>  {
> -	unsigned int order, t;
> -	for_each_migratetype_order(order, t) {
> -		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
> +	struct list_head *list;
> +	unsigned int order;
> +
> +	for_each_free_list(list, zone, order)
> +		INIT_LIST_HEAD(list);
> +
> +	for (order = 0; order < NR_PAGE_ORDERS; order++)
> 		zone->free_area[order].nr_free = 0;
> -	}
>  
>  #ifdef CONFIG_UNACCEPTED_MEMORY
>  	INIT_LIST_HEAD(&zone->unaccepted_pages);
>