From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Salunke, Hrushikesh" <hsalunke@amd.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rkodsara@amd.com, bharata@amd.com, ankur.a.arora@oracle.com,
shivankg@amd.com
Subject: Re: [PATCH v3] mm/page_alloc: replace kernel_init_pages() with batch page clearing
Date: Fri, 24 Apr 2026 10:52:06 +0200
Message-ID: <1253ca14-69de-418f-8f94-b08e8105e924@kernel.org>
In-Reply-To: <ef5ad58f-0641-4017-a199-defd25430bed@amd.com>
On 4/24/26 10:42, Salunke, Hrushikesh wrote:
>
> On 23-04-2026 16:42, Andrew Morton wrote:
>>
>> On Wed, 22 Apr 2026 10:26:58 +0000 Hrushikesh Salunke <hsalunke@amd.com> wrote:
>>
>>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>>> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
>>> kmap_local_page()/kunmap_local() overhead and prevents the architecture
>>> clearing primitive from operating on contiguous ranges.
>>>
>>> Introduce clear_highpages_kasan_tagged() in highmem.h, a batch
>>> clearing helper that calls clear_pages() for the full contiguous range
>>> on !HIGHMEM systems, bypassing the per-page kmap overhead and allowing
>>> a single invocation of the arch clearing primitive across the entire
>>> allocation. The HIGHMEM path falls back to per-page clearing since
>>> those pages require kmap.
>>>
>>> Replace kernel_init_pages() with direct calls to the new helper, as it
>>> becomes a trivial wrapper.
>>>
>>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>>
>>> Before: 0.445s
>>> After: 0.166s (-62.7%, 2.68x faster)
>> Nice.
>>
>>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>>
>>> Workload Before After Change
>>> Graph500 64C128T 30m 41.8s 15m 14.8s -50.3%
>>> Graph500 16C32T 15m 56.7s 9m 43.7s -39.0%
>>> Pagerank 32T 1m 58.5s 1m 12.8s -38.5%
>>> Pagerank 128T 2m 36.3s 1m 40.4s -35.7%
>>>
>>> ...
>>>
>>> --- a/include/linux/highmem.h
>>> +++ b/include/linux/highmem.h
>>> @@ -345,6 +345,21 @@ static inline void clear_highpage_kasan_tagged(struct page *page)
>>> kunmap_local(kaddr);
>>> }
>>>
>>> +static inline void clear_highpages_kasan_tagged(struct page *page, int numpages)
>>> +{
>>> + /* s390's use of memset() could override KASAN redzones. */
>>> + kasan_disable_current();
>>> + if (!IS_ENABLED(CONFIG_HIGHMEM)) {
>>> + clear_pages(kasan_reset_tag(page_address(page)), numpages);
>>> + } else {
>>> + int i;
>>> +
>>> + for (i = 0; i < numpages; i++)
>>> + clear_highpage_kasan_tagged(page + i);
>>> + }
>>> + kasan_enable_current();
>>> +}
>> Why was it globally published and inlined? Is there any expectation
>> that this will be used outside of page_alloc.c?
>>
>> Both of the callsites are themselves inlined. The patch adds 330 bytes
>> to my arm allmodconfig page_alloc.o - did we gain anything from that?
>>
> Hi Andrew,
>
> The idea was to keep it alongside clear_highpage_kasan_tagged() as its
> batch counterpart, but currently it is only used by page_alloc.c.
Right.
Looking at init_vmalloc_pages(), I wonder if it could also benefit from batching
if we find that the pages are actually contiguous.
That would require looking up multiple pages at once -- vmalloc_to_pages() or
something like that. Surely, doing such an optimized page table walk could be
beneficial by itself.
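Completely untested sketch of the idea (a bulk vmalloc_to_pages() does not
exist yet, so this still does per-page lookups, just to detect physically
contiguous runs that clear_highpages_kasan_tagged() can then handle in one go):

static void init_vmalloc_pages(const void *addr, unsigned int nr_pages)
{
	unsigned int i = 0;

	while (i < nr_pages) {
		struct page *page = vmalloc_to_page(addr + (i << PAGE_SHIFT));
		unsigned int run = 1;

		/* Extend the run while the next page is physically contiguous. */
		while (i + run < nr_pages &&
		       vmalloc_to_page(addr + ((i + run) << PAGE_SHIFT)) == page + run)
			run++;

		clear_highpages_kasan_tagged(page, run);
		i += run;
	}
}

Whether the run detection pays off likely depends on how often vmalloc areas
are actually backed by contiguous pages.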
>
> Your concern about the code size increase is valid. Would you prefer if
> I move it to page_alloc.c as a static function and drop the inline
> in v4? If an external user comes along later it can always be moved
> back to the header.
What exactly is responsible for the code increase? The two calls in
clear_highpages_kasan_tagged()?
Surely the compiler would just inline kernel_init_pages() already?
So my best guess is that the 330 bytes are just clear_pages() overhead or some
code layout changes?
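Comparing the old and new objects with scripts/bloat-o-meter should tell us
which symbols actually grew, something like:

	./scripts/bloat-o-meter page_alloc.o.old page_alloc.o.new

(file names just for illustration).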
--
Cheers,
David
Thread overview: 10+ messages
2026-04-22 10:26 [PATCH v3] mm/page_alloc: replace kernel_init_pages() with batch page clearing Hrushikesh Salunke
2026-04-22 18:25 ` David Hildenbrand (Arm)
2026-04-23 5:09 ` Salunke, Hrushikesh
2026-04-23 10:13 ` David Hildenbrand (Arm)
2026-04-23 11:12 ` Andrew Morton
2026-04-24 8:42 ` Salunke, Hrushikesh
2026-04-24 8:52 ` David Hildenbrand (Arm) [this message]
2026-04-28 3:55 ` Salunke, Hrushikesh
2026-04-28 7:06 ` David Hildenbrand (Arm)
2026-04-28 8:31 ` Ankur Arora