From: SeongJae Park <sj@kernel.org>
To: Jiayuan Chen <jiayuan.chen@linux.dev>
Cc: SeongJae Park <sj@kernel.org>,
damon@lists.linux.dev, Jiayuan Chen <jiayuan.chen@shopee.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm/damon/paddr: prefetch struct page of the next region
Date: Thu, 23 Apr 2026 18:42:45 -0700 [thread overview]
Message-ID: <20260424014245.1124-1-sj@kernel.org> (raw)
In-Reply-To: <20260423122340.138880-2-jiayuan.chen@linux.dev>

On Thu, 23 Apr 2026 20:23:37 +0800 Jiayuan Chen <jiayuan.chen@linux.dev> wrote:
> From: Jiayuan Chen <jiayuan.chen@shopee.com>
>
> In paddr mode with large nr_regions (20k+), damon_get_folio() dominates
> kdamond CPU time. perf annotate shows ~98% of its samples on a single
> load: reading page->compound_head for page_folio(). DAMON samples a
> random PFN per region, so the access pattern scatters across the
> vmemmap with no locality that hardware prefetchers can exploit; every
> compound_head load misses to DRAM (~250 cycles).
>
> Issue a software prefetch for the next region's struct page one
> iteration before it is needed. In __damon_pa_prepare_access_check(),
> the next region's sampling_addr is picked eagerly (one iteration ahead
> of where its mkold will run) so the prefetched line is usable, and the
> first region of each epoch is handled with a one-off branch since it
> has no predecessor to have pre-picked its addr. damon_pa_check_accesses()
> just prefetches based on sampling_addr already populated by the
> preceding prepare pass.
>
> Two details worth calling out:
>
> - prefetchw() rather than prefetch(): compound_head (+0x8) and
> _refcount (+0x34) share a 64B cacheline. The subsequent
> folio_try_get() / folio_put() atomics write _refcount, so bringing
> the line in exclusive state avoids a later S->M coherence upgrade.
>
> - pfn_to_page() rather than pfn_to_online_page(): on
> CONFIG_SPARSEMEM_VMEMMAP the former is pure arithmetic
> (vmemmap + pfn), while the latter walks the mem_section table. The
> mem_section lookup itself incurs a DRAM miss for random PFNs, which
> would serialize what is supposed to be a non-blocking hint - an
> earlier attempt that used pfn_to_online_page() saw the prefetch
> path's internal stall dominate perf (~91% skid on the converge
> point after the call). Prefetching an unmapped vmemmap entry is
> safe: the hint is dropped without faulting.

Sashiko raises a concern about this [1]. Could you please reply?
>
> Concurrency: list_next_entry() and &list == &head comparisons are safe
> without locking because kdamond is the sole mutator of the region list
> of its ctx; external threads must go through damon_call() which defers
> execution to the same kdamond thread between sampling iterations
> (see documentation on struct damon_ctx and damon_call()).
>
> Tested with paddr monitoring, max_nr_regions=20000, and stress-ng-vm
> consuming ~90% of memory across 8 workers: kdamond CPU further reduced
> from ~50% to ~40% of one core on top of the earlier damon_rand_fast()
> change.

This patch feels a bit complicated. I'm not sure the performance gain is
big enough to compensate for the added complexity. As I also asked on the
previous patch, could you please share more details about your use case?

I will review the code in detail after the above high-level question and
concern are addressed.

[1] https://lore.kernel.org/20260423192534.300CEC2BCB2@smtp.kernel.org

Thanks,
SJ
[...]
Thread overview: 10+ messages
2026-04-23 12:23 [PATCH 1/2] mm/damon: introduce damon_rand_fast() for per-ctx PRNG Jiayuan Chen
2026-04-23 12:23 ` [PATCH 2/2] mm/damon/paddr: prefetch struct page of the next region Jiayuan Chen
2026-04-24 1:42 ` SeongJae Park [this message]
2026-04-24 1:36 ` [PATCH 1/2] mm/damon: introduce damon_rand_fast() for per-ctx PRNG SeongJae Park
2026-04-24 2:29 ` Jiayuan Chen
2026-04-24 15:11 ` SeongJae Park
2026-04-25 3:36 ` Jiayuan Chen
2026-04-25 15:59 ` SeongJae Park
2026-04-26 5:50 ` Jiayuan Chen
2026-04-26 17:33 ` SeongJae Park