From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Salunke, Hrushikesh" <hsalunke@amd.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rkodsara@amd.com, bharata@amd.com, ankur.a.arora@oracle.com,
shivankg@amd.com
Subject: Re: [PATCH v3] mm/page_alloc: replace kernel_init_pages() with batch page clearing
Date: Tue, 28 Apr 2026 09:06:14 +0200 [thread overview]
Message-ID: <b0243130-dd6c-4c9f-bbc2-7458aa0615c9@kernel.org> (raw)
In-Reply-To: <f75e03ce-68d1-474a-9b85-157f73af48dc@amd.com>
On 4/28/26 05:55, Salunke, Hrushikesh wrote:
>
> On 24-04-2026 14:22, David Hildenbrand (Arm) wrote:
>>
>> On 4/24/26 10:42, Salunke, Hrushikesh wrote:
>>> Hi Andrew,
>>>
>>> The idea was to keep it alongside clear_highpage_kasan_tagged() as its
>>> batch counterpart, but currently it is only used by page_alloc.c.
>> Right.
>>
>> Looking at init_vmalloc_pages(), I wonder if it could also benefit from batching
>> if we find that pages are actually contiguous.
>>
>> That would require looking up multiple pages at once. vmalloc_to_pages() or sth
>> like that. Surely, doing such an optimized page table walk could be beneficial
>> by itself.
>
> Interesting idea. For the general case where we only have struct page
> pointers, we'd need physical contiguity detection and a batched page
> table walk as you described. But looking at init_vmalloc_pages()
> specifically, it already has the vmalloc virtual address which is
> contiguous, so can we just do following and potentially skip the
> vmalloc_to_page() walk entirely:
>
> clear_pages(kasan_reset_tag((void *)start), size >> PAGE_SHIFT);
>
> What do you think? Would this simpler approach work, or am I missing
> something?
Good question. :)

That way you'd be operating on the vmalloc address range, not on the direct map.
Is the vmalloc address range guaranteed to be writable at that point?
--
Cheers,
David
Thread overview: 10+ messages
2026-04-22 10:26 [PATCH v3] mm/page_alloc: replace kernel_init_pages() with batch page clearing Hrushikesh Salunke
2026-04-22 18:25 ` David Hildenbrand (Arm)
2026-04-23 5:09 ` Salunke, Hrushikesh
2026-04-23 10:13 ` David Hildenbrand (Arm)
2026-04-23 11:12 ` Andrew Morton
2026-04-24 8:42 ` Salunke, Hrushikesh
2026-04-24 8:52 ` David Hildenbrand (Arm)
2026-04-28 3:55 ` Salunke, Hrushikesh
2026-04-28 7:06 ` David Hildenbrand (Arm) [this message]
2026-04-28 8:31 ` Ankur Arora