From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Hrushikesh Salunke <hsalunke@amd.com>,
akpm@linux-foundation.org, ljs@kernel.org,
Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, jackmanb@google.com,
hannes@cmpxchg.org, ziy@nvidia.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rkodsara@amd.com, bharata@amd.com, ankur.a.arora@oracle.com,
shivankg@amd.com
Subject: Re: [PATCH v3] mm/page_alloc: replace kernel_init_pages() with batch page clearing
Date: Wed, 22 Apr 2026 20:25:48 +0200
Message-ID: <be256a32-dd9f-4dcb-b9b4-3f2a7fcde70c@kernel.org>
In-Reply-To: <20260422102729.166599-1-hsalunke@amd.com>
On 4/22/26 12:26, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
>
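For context, the per-page path being replaced looks roughly like this
upstream (quoting from memory, so not necessarily the exact tree):

static inline void clear_highpage_kasan_tagged(struct page *page)
{
	void *kaddr = kmap_local_page(page);

	clear_page(kasan_reset_tag(kaddr));
	kunmap_local(kaddr);
}

static void kernel_init_pages(struct page *page, int numpages)
{
	int i;

	/* s390's use of memset() could override KASAN redzones. */
	kasan_disable_current();
	for (i = 0; i < numpages; i++)
		clear_highpage_kasan_tagged(page + i);
	kasan_enable_current();
}

So every page pays for a kmap_local_page()/kunmap_local() round trip and a
separate clear_page() call.
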
> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
> clearing helper that calls clear_pages() for the full contiguous range
> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
> a single invocation of the arch clearing primitive across the entire
> allocation. The HIGHMEM path falls back to per-page clearing since
> those pages require kmap.
>
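I read that as the helper ending up roughly like the following (my sketch of
the description, not the actual patch; the kasan_disable_current()/
kasan_enable_current() pair from kernel_init_pages() is left out here):

static inline void clear_highpages_kasan_tagged(struct page *page,
						unsigned int numpages)
{
#ifdef CONFIG_HIGHMEM
	unsigned int i;

	/* Highmem pages need a per-page kmap, so clear them one at a time. */
	for (i = 0; i < numpages; i++)
		clear_highpage_kasan_tagged(page + i);
#else
	/*
	 * Direct-mapped pages can be handed to the arch primitive as one
	 * contiguous range, avoiding the per-page kmap/kunmap entirely.
	 */
	clear_pages(kasan_reset_tag(page_address(page)), numpages);
#endif
}
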
> Replace kernel_init_pages() with direct calls to the new helper, since
> kernel_init_pages() would otherwise be reduced to a trivial wrapper.
>
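For example, a call site would then change along these lines (illustrative
diff based on what post_alloc_hook() looks like upstream, not taken from the
patch):

-	if (init)
-		kernel_init_pages(page, 1 << order);
+	if (init)
+		clear_highpages_kasan_tagged(page, 1 << order);
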
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>
> Before: 0.445s
> After: 0.166s (-62.7%, 2.68x faster)
>
> Kernel time (sys) reduction per workload with init_on_alloc=1:
>
> Workload          Before      After       Change
> Graph500 64C128T  30m 41.8s   15m 14.8s   -50.3%
> Graph500 16C32T   15m 56.7s    9m 43.7s   -39.0%
> Pagerank 32T       1m 58.5s    1m 12.8s   -38.5%
> Pagerank 128T      2m 36.3s    1m 40.4s   -35.7%
We do have some elaborate handling in clear_contig_highpages() to chunk the
range up (and to call cond_resched()). But that function can get called with
much bigger ranges.

I'm not concerned about the cond_resched() -- we didn't do one here before
either -- but I'm wondering whether we could end up triggering a HW
instruction that is uninterruptible and takes a rather long time.

But clear_contig_highpages() breaks the range into 32MiB chunks, and so far
only x86 implements the batched clearing. So with a maximum buddy order of
4MiB on x86, we stay well below that limit.
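
To illustrate the pattern I mean (a sketch only -- clear_pages_chunked() and
CLEAR_CHUNK_PAGES are made-up names, not the actual clear_contig_highpages()
code):

#define CLEAR_CHUNK_PAGES	(SZ_32M / PAGE_SIZE)

static void clear_pages_chunked(struct page *page, unsigned int npages)
{
	unsigned int i, count;

	for (i = 0; i < npages; i += count) {
		/* Bound each arch-level clear to 32MiB worth of pages. */
		count = min(npages - i, (unsigned int)CLEAR_CHUNK_PAGES);
		clear_pages(page_address(page + i), count);
		/* Give the scheduler a chance between chunks. */
		cond_resched();
	}
}

As long as buddy allocations stay well below that chunk size, the new helper
never issues a longer-running clear than clear_contig_highpages() already
does.
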
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
--
Cheers,
David