From: Raghavendra K T <rkodsara@amd.com>
To: "Salunke, Hrushikesh" <hsalunke@amd.com>,
	"Vlastimil Babka (SUSE)" <vbabka@kernel.org>,
	akpm@linux-foundation.org, surenb@google.com, mhocko@suse.com,
	jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	bharata@amd.com, ankur.a.arora@oracle.com, shivankg@amd.com,
	David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH] mm/page_alloc: use batch page clearing in kernel_init_pages()
Date: Wed, 8 Apr 2026 16:46:48 +0530	[thread overview]
Message-ID: <aceb0077-b206-4484-b102-07de537dcb1e@amd.com> (raw)
In-Reply-To: <4e8c218b-ac5e-4674-9e1e-acf750f0a5c8@amd.com>



On 4/8/2026 4:14 PM, Salunke, Hrushikesh wrote:
> 
> On 08-04-2026 15:17, Vlastimil Babka (SUSE) wrote:
> 
>> On 4/8/26 11:24, Hrushikesh Salunke wrote:
>>> When init_on_alloc is enabled, kernel_init_pages() clears every page
>>> one at a time, calling clear_page() per page.  This is unnecessarily
>>> slow for large contiguous allocations (mTHPs, HugeTLB) that dominate
>>> real workloads.
>>>
>>> On 64-bit (!HIGHMEM) systems, switch to clearing pages in batch via
>>> clear_pages(), bypassing the per-page kmap_local_page()/kunmap_local()
>>> overhead and allowing the arch clearing primitive to operate on the full
>>> contiguous range in a single invocation.  The batch size is the full
>>> allocation when the preempt model is preemptible (preemption points are
>>> implicit), or PROCESS_PAGES_NON_PREEMPT_BATCH otherwise, with
>>> cond_resched() between batches to limit scheduling latency under
>>> cooperative preemption.
>>>
>>> The HIGHMEM path is kept as-is since those pages require kmap.
>>>
>>> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
>>>
>>>    Before: 0.445s
>>>    After:  0.166s  (-62.7%, 2.68x faster)
>>>
>>> Kernel time (sys) reduction per workload with init_on_alloc=1:
>>>
>>>    Workload            Before       After       Change
>>>    Graph500 64C128T    30m 41.8s    15m 14.8s   -50.3%
>>>    Graph500 16C32T     15m 56.7s     9m 43.7s   -39.0%
>>>    Pagerank 32T         1m 58.5s     1m 12.8s   -38.5%
>>>    Pagerank 128T        2m 36.3s     1m 40.4s   -35.7%
>>>
>>> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
>>> ---
>>> base commit: 1a2fbbe3653f0ebb24af9b306a8a968287344a35
>> Any way to reuse the code added by [1], e.g. clear_user_highpages()?
>>
>> [1]
>> https://lore.kernel.org/linux-mm/20250917152418.4077386-1-ankur.a.arora@oracle.com/
> 
> Thanks for the review. Sure, I will check if code reuse is possible.
> Meanwhile I found another issue with the current patch.
> 
> kernel_init_pages() runs inside the allocator (post_alloc_hook and
> __free_pages_prepare), so it inherits whatever context the caller is in.
> Testing with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_PROVE_LOCKING=y, I
> hit this during exit_group() -> exit_mmap() -> __zap_vma_range, where a
> page allocation happens while the PTE lock and RCU read lock are held,
> making the cond_resched() in the clearing loop illegal:
> 
> [ 1997.353228] BUG: sleeping function called from invalid context at mm/page_alloc.c:1235
> [ 1997.353433] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 19725, name: bash
> [ 1997.353572] preempt_count: 1, expected: 0
> [ 1997.353706] RCU nest depth: 1, expected: 0
> [ 1997.353837] 3 locks held by bash/19725:
> [ 1997.353839]  #0: ff38cd415971e540 (&mm->mmap_lock){++++}-{4:4}, at: exit_mmap+0x6e/0x430
> [ 1997.353850]  #1: ffffffffb03d6f60 (rcu_read_lock){....}-{1:3}, at: __pte_offset_map+0x2c/0x220
> [ 1997.353855]  #2: ff38cd410deb4618 (ptlock_ptr(ptdesc)#2){+.+.}-{3:3}, at: pte_offset_map_lock+0x92/0x170
> [ 1997.353868] Call Trace:
> [ 1997.353870]  <TASK>
> [ 1997.353873]  dump_stack_lvl+0x91/0xb0
> [ 1997.353877]  __might_resched+0x15f/0x290
> [ 1997.353882]  kernel_init_pages+0x4b/0xa0
> [ 1997.353886]  get_page_from_freelist+0x406/0x1e60
> [ 1997.353895]  __alloc_frozen_pages_noprof+0x1d8/0x1730
> [ 1997.353912]  alloc_pages_mpol+0xa4/0x190
> [ 1997.353917]  alloc_pages_noprof+0x59/0xd0
> [ 1997.353919]  get_free_pages_noprof+0x11/0x40
> [ 1997.353921]  __tlb_remove_folio_pages_size.isra.0+0x7f/0xe0
> [ 1997.353923]  __zap_vma_range+0x1bbd/0x1f40
> [ 1997.353931]  unmap_vmas+0xd9/0x1d0
> [ 1997.353934]  exit_mmap+0x10a/0x430
> [ 1997.353943]  __mmput+0x3d/0x130
> [ 1997.353947]  do_exit+0x2a7/0xae0
> [ 1997.353951]  do_group_exit+0x36/0xa0
> [ 1997.353953]  __x64_sys_exit_group+0x18/0x20
> [ 1997.353959]  do_syscall_64+0xe1/0x710
> [ 1997.353990]  entry_SYSCALL_64_after_hwframe+0x76/0x7e
> [ 1997.354003]  </TASK>
> 
> This also means clear_contig_highpages() can't be directly reused here
> since it has an unconditional might_sleep() + cond_resched(). I'll look
> into this. Any suggestions on the right way to handle cond_resched()
> in a context that may or may not be atomic?
> 
> Thanks,
> Hrushikesh
> 
>>>   mm/page_alloc.c | 19 +++++++++++++++++--
>>>   1 file changed, 17 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index b1c5430cad4e..178cbebadd50 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -1224,8 +1224,23 @@ static void kernel_init_pages(struct page *page, int numpages)
>>>
>>>        /* s390's use of memset() could override KASAN redzones. */
>>>        kasan_disable_current();
>>> -     for (i = 0; i < numpages; i++)
>>> -             clear_highpage_kasan_tagged(page + i);
>>> +
>>> +     if (!IS_ENABLED(CONFIG_HIGHMEM)) {
>>> +             void *addr = kasan_reset_tag(page_address(page));
>>> +             unsigned int unit = preempt_model_preemptible() ?
>>> +                                     numpages : PROCESS_PAGES_NON_PREEMPT_BATCH;
>>> +             int count;
>>> +
>>> +             for (i = 0; i < numpages; i += count) {
>>> +                     cond_resched();

Just thinking: for a preemptible kernel (or preempt_auto),
preempt_count() already tracks where it is safe to preempt,

and

for a non-preemptible or voluntary kernel it is safe to preempt at
PROCESS_PAGES_NON_PREEMPT_BATCH granularity,

so do we need the cond_resched() here?

Let me know if I am missing something.

>>> +                     count = min_t(int, unit, numpages - i);
>>> +                     clear_pages(addr + (i << PAGE_SHIFT), count);
>>> +             }
>>> +     } else {
>>> +             for (i = 0; i < numpages; i++)
>>> +                     clear_highpage_kasan_tagged(page + i);
>>> +     }
>>> +
>>>        kasan_enable_current();
>>>   }
>>>

Regards
- Raghu



Thread overview: 12+ messages
2026-04-08  9:24 [PATCH] mm/page_alloc: use batch page clearing in kernel_init_pages() Hrushikesh Salunke
2026-04-08  9:47 ` Vlastimil Babka (SUSE)
2026-04-08 10:44   ` Salunke, Hrushikesh
2026-04-08 10:53     ` David Hildenbrand (Arm)
2026-04-08 11:16     ` Raghavendra K T [this message]
2026-04-08 16:24       ` Raghavendra K T
2026-04-08 15:32     ` Andrew Morton
2026-04-09  8:55       ` Salunke, Hrushikesh
2026-04-09  9:00         ` David Hildenbrand (Arm)
2026-04-09  9:28           ` Salunke, Hrushikesh
2026-04-09 12:02           ` Michal Hocko
2026-04-08 11:32 ` [syzbot ci] " syzbot ci
