From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Zi Yan <ziy@nvidia.com>, Muhammad Usama Anjum <usama.anjum@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Uladzislau Rezki <urezki@gmail.com>,
Nick Terrell <terrelln@fb.com>, David Sterba <dsterba@suse.com>,
Vishal Moola <vishal.moola@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, Ryan.Roberts@arm.com,
david.hildenbrand@arm.com
Subject: Re: [PATCH v3 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Date: Wed, 25 Mar 2026 11:14:59 +0100
Message-ID: <4ca68533-9abf-4d54-ba15-5ab9bc3ed0dc@kernel.org>
In-Reply-To: <309C0716-A53B-422D-ABDE-E865DE0473BB@nvidia.com>
On 3/24/26 16:06, Zi Yan wrote:
> On 24 Mar 2026, at 9:35, Muhammad Usama Anjum wrote:
>
>> Apply the same batch-freeing optimization from free_contig_range() to the
>> frozen page path. The previous __free_contig_frozen_range() freed each
>> order-0 page individually via free_frozen_pages(), which is slow for the
>> same reason the old free_contig_range() was: each page goes to the
>> order-0 pcp list rather than being coalesced into higher-order blocks.
>>
>> Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
>> each order-0 page, then batch the prepared pages into the largest
>> possible power-of-2 aligned chunks via free_prepared_contig_range().
>> If free_pages_prepare() fails (e.g. a HWPoisoned or bad page), the page
>> is deliberately not freed; it must not be returned to the allocator.
>>
>> I've tested CMA through debugfs. The test allocates and frees 16384
>> pages per iteration, for several iterations. There is a 3.5x improvement.
>>
>> Before: 1406 usec per iteration
>> After: 402 usec per iteration
>>
>> Before:
>>
>> 70.89% 0.69% cma [kernel.kallsyms] [.] free_contig_frozen_range
>> |
>> |--70.20%--free_contig_frozen_range
>> | |
>> | |--46.41%--__free_frozen_pages
>> | | |
>> | | --36.18%--free_frozen_page_commit
>> | | |
>> | | --29.63%--_raw_spin_unlock_irqrestore
>> | |
>> | |--8.76%--_raw_spin_trylock
>> | |
>> | |--7.03%--__preempt_count_dec_and_test
>> | |
>> | |--4.57%--_raw_spin_unlock
>> | |
>> | |--1.96%--__get_pfnblock_flags_mask.isra.0
>> | |
>> | --1.15%--free_frozen_page_commit
>> |
>> --0.69%--el0t_64_sync
>>
>> After:
>>
>> 23.57% 0.00% cma [kernel.kallsyms] [.] free_contig_frozen_range
>> |
>> ---free_contig_frozen_range
>> |
>> |--20.45%--__free_contig_frozen_range
>> | |
>> | |--17.77%--free_pages_prepare
>> | |
>> | --0.72%--free_prepared_contig_range
>> | |
>> | --0.55%--__free_frozen_pages
>> |
>> --3.12%--free_pages_prepare
>>
>> Suggested-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
>> ---
>> Changes since v2:
>> - Rework the loop to check for memory sections just like __free_contig_range()
>> - Didn't add reviewed-by tags because of rework
>> ---
>> mm/page_alloc.c | 26 ++++++++++++++++++++++++--
>> 1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 250cc07e547b8..26eac35ef73bd 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -7038,8 +7038,30 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
>>
>> static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
>> {
>> - for (; nr_pages--; pfn++)
>> - free_frozen_pages(pfn_to_page(pfn), 0);
>> + struct page *page = pfn_to_page(pfn);
>> + struct page *start = NULL;
>> + unsigned long start_sec;
>> + unsigned long i;
>> +
>> + for (i = 0; i < nr_pages; i++, page++) {
>> + if (!free_pages_prepare(page, 0)) {
>> + if (start) {
>> + free_prepared_contig_range(start, page - start);
>> + start = NULL;
>> + }
>> + } else if (start &&
>> + memdesc_section(page->flags) != start_sec) {
>> + free_prepared_contig_range(start, page - start);
>> + start = page;
>> + start_sec = memdesc_section(page->flags);
>> + } else if (!start) {
>> + start = page;
>> + start_sec = memdesc_section(page->flags);
>> + }
>> + }
>> +
>> + if (start)
>> + free_prepared_contig_range(start, page - start);
>> }
>
> This looks almost the same as __free_contig_range().
>
> Two approaches to deduplicate the code:
>
> 1. __free_contig_range() first does put_page_testzero()
> on all pages and calls __free_contig_frozen_range()
> on the range; __free_contig_frozen_range() will need
> to skip the non-frozen pages. It is not ideal.
Right, let's not do that.
>
> 2. add a helper function
> __free_contig_range_common(unsigned long pfn,
> unsigned long nr_pages, bool is_page_frozen),
> and
> a. call __free_contig_range_common(..., /*is_page_frozen=*/ false)
> in __free_contig_range(),
> b. __free_contig_range_common(..., /*is_page_frozen=*/ true)
> in __free_contig_frozen_range().
>
As long as it's an internal helper, that makes sense. I wouldn't want to
expose the bool in the external interface.
Thanks!
--
Cheers,
David