Linux-mm Archive on lore.kernel.org
From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Hrushikesh Salunke <hsalunke@amd.com>,
	akpm@linux-foundation.org, ljs@kernel.org,
	Liam.Howlett@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, jackmanb@google.com,
	hannes@cmpxchg.org, ziy@nvidia.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	rkodsara@amd.com, bharata@amd.com, ankur.a.arora@oracle.com,
	shivankg@amd.com
Subject: Re: [PATCH v4] mm/page_alloc: replace kernel_init_pages() with batch page clearing
Date: Tue, 12 May 2026 10:58:56 +0200	[thread overview]
Message-ID: <d964c609-cd40-45ae-8e9c-6c6ed28f3e4f@kernel.org> (raw)
In-Reply-To: <20260504063942.553438-1-hsalunke@amd.com>

On 5/4/26 08:39, Hrushikesh Salunke wrote:
> When init_on_alloc is enabled, kernel_init_pages() clears every page
> one at a time via clear_highpage_kasan_tagged(), which incurs per-page
> kmap_local_page()/kunmap_local() overhead and prevents the architecture
> clearing primitive from operating on contiguous ranges.
> 
> Introduce clear_highpages_kasan_tagged() as a static batch clearing
> helper in page_alloc.c that calls clear_pages() for the full contiguous
> range on !HIGHMEM systems, bypassing the per-page kmap overhead and
> allowing a single invocation of the arch clearing primitive across the
> entire allocation. The HIGHMEM path falls back to per-page clearing
> since those pages require kmap.
> 
> Replace kernel_init_pages() with direct calls to the new helper, as it
> becomes a trivial wrapper.
> 
> Allocating 8192 x 2MB HugeTLB pages (16GB) with init_on_alloc=1:
> 
>   Before: 0.445s
>   After:  0.166s  (-62.7%, 2.68x faster)
> 
> Kernel time (sys) reduction per workload with init_on_alloc=1:
> 
>   Workload            Before       After       Change
>   Graph500 64C128T    30m 41.8s    15m 14.8s   -50.3%
>   Graph500 16C32T     15m 56.7s     9m 43.7s   -39.0%
>   Pagerank 32T         1m 58.5s     1m 12.8s   -38.5%
>   Pagerank 128T        2m 36.3s     1m 40.4s   -35.7%
> 
> Signed-off-by: Hrushikesh Salunke <hsalunke@amd.com>
> Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Pankaj Gupta <pankaj.gupta@amd.com>
> ---

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David


Thread overview: 2+ messages
2026-05-04  6:39 [PATCH v4] mm/page_alloc: replace kernel_init_pages() with batch page clearing Hrushikesh Salunke
2026-05-12  8:58 ` David Hildenbrand (Arm) [this message]
