From: sashiko-bot@kernel.org
To: "SeongJae Park" <sj@kernel.org>
Cc: damon@lists.linux.dev
Subject: Re: [RFC PATCH 3/7] mm/damon/lru_sort: cover all system rams
Date: Wed, 15 Apr 2026 02:36:28 +0000
Message-ID: <20260415023628.DDFC8C2BCB5@smtp.kernel.org>
In-Reply-To: <20260415012048.76508-4-sj@kernel.org>
Sashiko AI review found 3 potential issues:
- [High] Monitoring the bounding box of all System RAM resources encompasses massive physical address holes, fundamentally breaking DAMON's sampling-based access check on sparse memory systems.
- [High] Applying schemes to the newly-enlarged bounding box can cause a soft lockup panic due to unbounded page iteration without yielding.
- [Low] Misspelled 'phyiscal' in module parameter documentation.
--
commit 11d2e4765abf8999777a27c53c055f1c43ca53bf
Author: SeongJae Park <sj@kernel.org>
mm/damon/lru_sort: cover all system rams
This commit updates DAMON_LRU_SORT to cover all system RAM instead of
just the biggest System RAM resource. The default monitor region start
and end parameter descriptions and initializations are modified to
reflect the entire physical memory space.
> diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
> index 554559d729760..7f8c26890ffb2 100644
> --- a/mm/damon/lru_sort.c
> +++ b/mm/damon/lru_sort.c
> @@ -140,7 +140,8 @@ DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_lru_sort_mon_attrs);
> * Start of the target memory region in physical address.
> *
> * The start physical address of memory region that DAMON_LRU_SORT will do work
> - * against. By default, biggest System RAM is used as the region.
> + * against. By default, the system's entire phyiscal memory is used as the
This isn't a functional bug, but there is a typo here: 'phyiscal' should be
'physical'.
> + * region.
> */
> static unsigned long monitor_region_start __read_mostly;
> module_param(monitor_region_start, ulong, 0600);
> @@ -149,7 +150,8 @@ module_param(monitor_region_start, ulong, 0600);
> * End of the target memory region in physical address.
> *
> * The end physical address of memory region that DAMON_LRU_SORT will do work
> - * against. By default, biggest System RAM is used as the region.
> + * against. By default, the system's entire phyiscal memory is used as the
The same typo ('phyiscal') appears here as well.
> + * region.
> */
> static unsigned long monitor_region_end __read_mostly;
> module_param(monitor_region_end, ulong, 0600);
[ ... ]
> @@ -336,7 +338,7 @@ static int damon_lru_sort_apply_parameters(void)
> if (err)
> goto out;
>
> - err = damon_set_region_biggest_system_ram_default(param_target,
> + err = damon_set_region_system_rams_default(param_target,
> &monitor_region_start,
> &monitor_region_end,
> param_ctx->addr_unit,
Will monitoring the bounding box of all System RAM resources encompass massive
physical address holes on sparse memory systems?
Because DAMON estimates access frequency by sampling uniformly at random from
the region:
r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
If the bounding box spans across terabytes of address gaps between NUMA nodes,
would the overwhelming majority of samples hit unpopulated holes?
Since sampling a hole returns no folio, does this artificially dilute the
apparent access frequency of valid hot RAM within the region and misidentify
it as cold?
Can this also cause soft lockups during scheme application?
When applying a scheme, the region is iterated page-by-page. For unpopulated
gaps, the folio check is skipped and the address is incremented:
while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
continue;
}
...
Since there is no cond_resched() inside this tight loop, and because valid
folios are not found (meaning quota variables don't increment to abort early),
could iterating over terabytes of unpopulated gaps trigger the soft lockup
watchdog on PREEMPT_NONE or PREEMPT_VOLUNTARY kernels?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=3