From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Salunke, Hrushikesh" <hsalunke@amd.com>,
"Vlastimil Babka (SUSE)" <vbabka@kernel.org>,
akpm@linux-foundation.org, surenb@google.com, mhocko@suse.com,
jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
rkodsara@amd.com, bharata@amd.com, ankur.a.arora@oracle.com,
shivankg@amd.com
Subject: Re: [PATCH] mm/page_alloc: use batch page clearing in kernel_init_pages()
Date: Wed, 8 Apr 2026 12:53:41 +0200
Message-ID: <3f5d6955-e202-44dd-b490-863b7193a0c1@kernel.org>
In-Reply-To: <4e8c218b-ac5e-4674-9e1e-acf750f0a5c8@amd.com>

On 4/8/26 12:44, Salunke, Hrushikesh wrote:
>
> On 08-04-2026 15:17, Vlastimil Babka (SUSE) wrote:
>
>> On 4/8/26 11:24, Hrushikesh Salunke wrote:
>>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>>> one at a time, calling clear_page() per page. This is unnecessarily
>>> slow for large contiguous allocations (mTHPs, HugeTLB) that dominate
>>> real workloads.
>>>
>>> On 64-bit (!HIGHMEM) systems, switch to clearing pages in batch via
>>> clear_pages(), bypassing the per-page kmap_local_page()/kunmap_local()
>>> overhead and allowing the arch clearing primitive to operate on the full
>>> contiguous range in a single invocation. The batch size is the full
>>> allocation when the preempt model is preemptible (preemption points are
>>> implicit), or PROCESS_PAGES_NON_PREEMPT_BATCH otherwise, with
>>> cond_resched() between batches to limit scheduling latency under
>>> cooperative preemption.
>>>
>>> The HIGHMEM path is kept as-is since those pages require kmap.
>>>
>>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>>
>>> Before: 0.445s
>>> After: 0.166s (-62.7%, 2.68x faster)
>>>
>>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>>
>>> Workload Before After Change
>>> Graph500 64C128T 30m 41.8s 15m 14.8s -50.3%
>>> Graph500 16C32T 15m 56.7s 9m 43.7s -39.0%
>>> Pagerank 32T 1m 58.5s 1m 12.8s -38.5%
>>> Pagerank 128T 2m 36.3s 1m 40.4s -35.7%
>>>
>>> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
>>> ---
>>> base commit: 1a2fbbe3653f0ebb24af9b306a8a968287344a35
>> Any way to reuse the code added by [1], e.g. clear_user_highpages()?
>>
>> [1]
>> https://lore.kernel.org/linux-mm/20250917152418.4077386-1-ankur.a.arora@oracle.com/
>
> Thanks for the review. Sure, I will check if code reuse is possible.
> Meanwhile I found another issue with the current patch.
>
> kernel_init_pages() runs inside the allocator (post_alloc_hook and
> __free_pages_prepare), so it inherits whatever context the caller is in.
> Testing with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_PROVE_LOCKING=y, I
> hit this during exit_group() -> exit_mmap() -> __zap_vma_range, where a
> page allocation happens while the PTE lock and RCU read lock are held,
> making the cond_resched() in the clearing loop illegal:
>
> [ 1997.353228] BUG: sleeping function called from invalid context at mm/page_alloc.c:1235
> [ 1997.353433] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 19725, name: bash
> [ 1997.353572] preempt_count: 1, expected: 0
> [ 1997.353706] RCU nest depth: 1, expected: 0
> [ 1997.353837] 3 locks held by bash/19725:
> [ 1997.353839] #0: ff38cd415971e540 (&mm->mmap_lock){++++}-{4:4}, at: exit_mmap+0x6e/0x430
> [ 1997.353850] #1: ffffffffb03d6f60 (rcu_read_lock){....}-{1:3}, at: __pte_offset_map+0x2c/0x220
> [ 1997.353855] #2: ff38cd410deb4618 (ptlock_ptr(ptdesc)#2){+.+.}-{3:3}, at: pte_offset_map_lock+0x92/0x170
> [ 1997.353868] Call Trace:
> [ 1997.353870] <TASK>
> [ 1997.353873] dump_stack_lvl+0x91/0xb0
> [ 1997.353877] __might_resched+0x15f/0x290
> [ 1997.353882] kernel_init_pages+0x4b/0xa0
> [ 1997.353886] get_page_from_freelist+0x406/0x1e60
> [ 1997.353895] __alloc_frozen_pages_noprof+0x1d8/0x1730
> [ 1997.353912] alloc_pages_mpol+0xa4/0x190
> [ 1997.353917] alloc_pages_noprof+0x59/0xd0
> [ 1997.353919] get_free_pages_noprof+0x11/0x40
> [ 1997.353921] __tlb_remove_folio_pages_size.isra.0+0x7f/0xe0
> [ 1997.353923] __zap_vma_range+0x1bbd/0x1f40
> [ 1997.353931] unmap_vmas+0xd9/0x1d0
> [ 1997.353934] exit_mmap+0x10a/0x430
> [ 1997.353943] __mmput+0x3d/0x130
> [ 1997.353947] do_exit+0x2a7/0xae0
> [ 1997.353951] do_group_exit+0x36/0xa0
> [ 1997.353953] __x64_sys_exit_group+0x18/0x20
> [ 1997.353959] do_syscall_64+0xe1/0x710
> [ 1997.353990] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 1997.354003] </TASK>
>
> This also means clear_contig_highpages() can't be directly reused here
> since it has an unconditional might_sleep() + cond_resched(). I'll look
> into this. Any suggestions on the right way to handle cond_resched()
> in a context that may or may not be atomic?
clear_contig_highpages() is prepared to handle arbitrary sizes,
including 1 GiB chunks or even larger.
The question is whether you even have to use
PROCESS_PAGES_NON_PREEMPT_BATCH, given that we cannot trigger a manual
resched either way (and the assumption is that the memory we are clearing
is not that big; well, on arm64 it can still be 512 MiB).
So I wonder what happens when you just use clear_pages().
Likely you should provide a clear_highpages_kasan_tagged() and a
clear_highpages()?
So you would be calling clear_highpages_kasan_tagged() here, which would
just default to calling clear_highpages() unless KASAN applies etc.
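For illustration, here is a rough userspace sketch of that batching shape
(the NON_PREEMPT_BATCH constant, the clear_pages_batched() name, and the
memset() stand-in for the kernel's clear_pages() are assumptions for this
sketch, not existing API; in the kernel the resched point would be
cond_resched(), which the trace above shows is illegal when the caller
holds the PTE lock or an RCU read lock):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
/* Stand-in for PROCESS_PAGES_NON_PREEMPT_BATCH from the patch. */
#define NON_PREEMPT_BATCH 16

/* Userspace stand-in for the kernel's clear_pages(): zero npages contiguous pages. */
static void clear_pages(void *addr, unsigned int npages)
{
	memset(addr, 0, (size_t)npages * PAGE_SIZE);
}

/*
 * Batched loop: under full preemption, clear the whole range in one call
 * (preemption points are implicit); otherwise clear in NON_PREEMPT_BATCH
 * chunks, with a rescheduling point between chunks.  The resched must be
 * skipped (or the context detected) when the caller may be atomic.
 */
static void clear_pages_batched(void *addr, unsigned int npages, int preemptible)
{
	unsigned int batch = preemptible ? npages : NON_PREEMPT_BATCH;
	char *p = addr;

	while (npages) {
		unsigned int n = npages < batch ? npages : batch;

		clear_pages(p, n);
		p += (size_t)n * PAGE_SIZE;
		npages -= n;
		/* kernel: if (!preemptible && npages) cond_resched(); */
	}
}
```

The sketch only shows the loop structure; whether the batching is worth it
at all for these sizes is exactly the open question above.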
--
Cheers,
David