* [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default
@ 2026-04-15 1:20 SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default() SeongJae Park
` (6 more replies)
0 siblings, 7 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Liam R. Howlett, Andrew Morton, David Hildenbrand,
Jonathan Corbet, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Shuah Khan, Suren Baghdasaryan, Vlastimil Babka, damon, linux-doc,
linux-kernel, linux-mm
DAMON_RECLAIM and DAMON_LRU_SORT set the biggest 'System RAM' resource
of the system as the default monitoring target address range. The main
intention behind this design is to minimize the overhead of monitoring
non-System RAM areas.
This could result in an odd setup when there are multiple discrete
System RAM resources of considerable size. For example, on a system
with two 500 GiB System RAM resources, only the first 500 GiB will be
set as the monitoring region by default. This is particularly common on
NUMA systems. Hence the modules allow users to set the monitoring
target address range using the module parameters if the default setup
doesn't work for them. In other words, the current design trades ease
of setup for lower overhead.
However, because DAMON utilizes sampling-based access checks and
adaptive regions adjustment, the overhead of monitoring non-System RAM
areas should be negligible in most setups. Meanwhile,
the setup complexity is causing real headaches for users who need to run
those modules on various types of systems. That is, the current
tradeoff is not a good deal.
Set the physical address range that can cover all System RAM areas of
the system as the default monitoring regions for DAMON_RECLAIM and
DAMON_LRU_SORT.
Technically speaking, this changes documented behavior. However, it is
hard to imagine a real use case that depends on the old, odd default
behavior. If the old default was working reasonably for users, this
change will only add a negligible amount of monitoring overhead. If it
wasn't working, those users are likely already setting the monitoring
regions manually, and hence will not be affected by this change.
Patches Sequence
================
Patch 1 introduces a new core function that will be used for the new
default monitoring target region setup. Patch 2 and 3 update
DAMON_RECLAIM and DAMON_LRU_SORT to use the new function instead of the
old one, respectively. Patch 4 removes the old core function that was
replaced by the new one, as it no longer has any users. Patch 5
updates DAMON_STAT to use the new function instead of its in-house,
nearly duplicate implementation of the same functionality. Finally
patches 6 and 7 update the DAMON_RECLAIM and DAMON_LRU_SORT user
documentation for the new behaviors, respectively.
SeongJae Park (7):
mm/damon: introduce damon_set_region_system_rams_default()
mm/damon/reclaim: cover all system rams
mm/damon/lru_sort: cover all system rams
mm/damon/core: remove damon_set_region_biggest_system_ram_default()
mm/damon/stat: use damon_set_region_system_rams_default()
Docs/admin-guide/mm/damon/reclaim: update for entire memory monitoring
Docs/admin-guide/mm/damon/lru_sort: update for entire memory
monitoring
.../admin-guide/mm/damon/lru_sort.rst | 6 ++-
.../admin-guide/mm/damon/reclaim.rst | 6 ++-
include/linux/damon.h | 2 +-
mm/damon/core.c | 49 +++++++++--------
mm/damon/lru_sort.c | 8 +--
mm/damon/reclaim.c | 14 ++---
mm/damon/stat.c | 53 ++-----------------
7 files changed, 50 insertions(+), 88 deletions(-)
base-commit: 11bcd10460e9446785fc04deb5d175806a00400b
--
2.47.3
^ permalink raw reply [flat|nested] 16+ messages in thread
* [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default()
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 1:35 ` sashiko-bot
2026-04-15 1:20 ` [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams SeongJae Park
` (5 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm
damon_set_region_biggest_system_ram_default() sets the monitoring target
region as the caller requested. If the caller didn't specify the
region, it finds the biggest System RAM of the system and sets it as the
target region. When the system has more than one System RAM resource
of considerable size, this default target setup makes no sense.
Introduce a variant, namely damon_set_region_system_rams_default(). It
sets a physical address range that covers all System RAM resources as
the default target region.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 5 +++
mm/damon/core.c | 77 ++++++++++++++++++++++++++++++++++++++++---
2 files changed, 77 insertions(+), 5 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 5fb1dc585658b..c4cdaa01ea530 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -1007,6 +1007,11 @@ int damon_kdamond_pid(struct damon_ctx *ctx);
int damon_call(struct damon_ctx *ctx, struct damon_call_control *control);
int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control);
+int damon_set_region_system_rams_default(struct damon_target *t,
+ unsigned long *start, unsigned long *end,
+ unsigned long addr_unit,
+ unsigned long min_region_sz);
+
int damon_set_region_biggest_system_ram_default(struct damon_target *t,
unsigned long *start, unsigned long *end,
unsigned long addr_unit,
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 48633da449104..3cef74ba7f0f4 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -3186,14 +3186,20 @@ static int kdamond_fn(void *data)
return 0;
}
-static int walk_system_ram(struct resource *res, void *arg)
+struct damon_system_ram_range_walk_arg {
+ bool walked;
+ struct resource res;
+};
+
+static int damon_system_ram_walk_fn(struct resource *res, void *arg)
{
- struct resource *a = arg;
+ struct damon_system_ram_range_walk_arg *a = arg;
- if (resource_size(a) < resource_size(res)) {
- a->start = res->start;
- a->end = res->end;
+ if (!a->walked) {
+ a->walked = true;
+ a->res.start = res->start;
}
+ a->res.end = res->end;
return 0;
}
@@ -3210,6 +3216,67 @@ static unsigned long damon_res_to_core_addr(resource_size_t ra,
return ra / addr_unit;
}
+static bool damon_find_system_rams_range(unsigned long *start,
+ unsigned long *end, unsigned long addr_unit)
+{
+ struct damon_system_ram_range_walk_arg arg = {};
+
+ walk_system_ram_res(0, -1, &arg, damon_system_ram_walk_fn);
+ if (!arg.walked)
+ return false;
+ *start = damon_res_to_core_addr(arg.res.start, addr_unit);
+ *end = damon_res_to_core_addr(arg.res.end + 1, addr_unit);
+ if (*end <= *start)
+ return false;
+ return true;
+}
+
+/**
+ * damon_set_region_system_rams_default() - Set the region of the given
+ * monitoring target as requested, or to cover all 'System RAM' resources.
+ * @t: The monitoring target to set the region.
+ * @start: The pointer to the start address of the region.
+ * @end: The pointer to the end address of the region.
+ * @addr_unit: The address unit for the damon_ctx of @t.
+ * @min_region_sz: Minimum region size.
+ *
+ * This function sets the region of @t as requested by @start and @end. If the
+ * values of @start and @end are zero, however, this function finds 'System
+ * RAM' resources and sets the region to cover all the resources. In the latter
+ * case, this function saves the start and the end addresses of the first and
+ * the last resources in @start and @end, respectively.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damon_set_region_system_rams_default(struct damon_target *t,
+ unsigned long *start, unsigned long *end,
+ unsigned long addr_unit, unsigned long min_region_sz)
+{
+ struct damon_addr_range addr_range;
+
+ if (*start > *end)
+ return -EINVAL;
+
+ if (!*start && !*end &&
+ !damon_find_system_rams_range(start, end, addr_unit))
+ return -EINVAL;
+
+ addr_range.start = *start;
+ addr_range.end = *end;
+ return damon_set_regions(t, &addr_range, 1, min_region_sz);
+}
+
+static int walk_system_ram(struct resource *res, void *arg)
+{
+ struct resource *a = arg;
+
+ if (resource_size(a) < resource_size(res)) {
+ a->start = res->start;
+ a->end = res->end;
+ }
+ return 0;
+}
+
/*
* Find biggest 'System RAM' resource and store its start and end address in
* @start and @end, respectively. If no System RAM is found, returns false.
--
2.47.3
* [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default() SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 1:58 ` sashiko-bot
2026-04-15 1:20 ` [RFC PATCH 3/7] mm/damon/lru_sort: " SeongJae Park
` (4 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm
DAMON_RECLAIM allows users to set the physical address range to monitor
and do the work on. When users don't explicitly set the range, the
biggest System RAM resource of the system is selected as the monitoring
target address range. The intention was to reduce the overhead from
monitoring non-System RAM areas because monitoring of non-System RAM may
be meaningless. However, because of the sampling-based access check and
adaptive regions adjustment, the overhead should be negligible. It
makes more sense to just cover all system rams of the system. Do so.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/reclaim.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
index 89998d28628c4..ecfc6b58d07d2 100644
--- a/mm/damon/reclaim.c
+++ b/mm/damon/reclaim.c
@@ -127,7 +127,8 @@ DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_reclaim_mon_attrs);
* Start of the target memory region in physical address.
*
* The start physical address of memory region that DAMON_RECLAIM will do work
- * against. By default, biggest System RAM is used as the region.
+ * against. By default, the system's entire physical memory is used as the
+ * region.
*/
static unsigned long monitor_region_start __read_mostly;
module_param(monitor_region_start, ulong, 0600);
@@ -136,7 +137,8 @@ module_param(monitor_region_start, ulong, 0600);
* End of the target memory region in physical address.
*
* The end physical address of memory region that DAMON_RECLAIM will do work
- * against. By default, biggest System RAM is used as the region.
+ * against. By default, the system's entire physical memory is used as the
+ * region.
*/
static unsigned long monitor_region_end __read_mostly;
module_param(monitor_region_end, ulong, 0600);
@@ -264,11 +266,9 @@ static int damon_reclaim_apply_parameters(void)
damos_add_filter(scheme, filter);
}
- err = damon_set_region_biggest_system_ram_default(param_target,
- &monitor_region_start,
- &monitor_region_end,
- param_ctx->addr_unit,
- param_ctx->min_region_sz);
+ err = damon_set_region_system_rams_default(param_target,
+ &monitor_region_start, &monitor_region_end,
+ param_ctx->addr_unit, param_ctx->min_region_sz);
if (err)
goto out;
err = damon_commit_ctx(ctx, param_ctx);
--
2.47.3
* [RFC PATCH 3/7] mm/damon/lru_sort: cover all system rams
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default() SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 2:36 ` sashiko-bot
2026-04-15 1:20 ` [RFC PATCH 4/7] mm/damon/core: remove damon_set_region_biggest_system_ram_default() SeongJae Park
` (3 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm
DAMON_LRU_SORT allows users to set the physical address range to monitor
and do the work on. When users don't explicitly set the range, the
biggest System RAM resource of the system is selected as the monitoring
target address range. The intention was to reduce the overhead from
monitoring non-System RAM areas because monitoring non-System RAM may be
meaningless. However, because of the sampling-based access check and
adaptive regions adjustment, the overhead should be negligible. It
makes more sense to just cover all system rams of the system. Do so.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/lru_sort.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 641af42cc2d1a..48ddfa6369c93 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -139,7 +139,8 @@ DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_lru_sort_mon_attrs);
* Start of the target memory region in physical address.
*
* The start physical address of memory region that DAMON_LRU_SORT will do work
- * against. By default, biggest System RAM is used as the region.
+ * against. By default, the system's entire phyiscal memory is used as the
+ * region.
*/
static unsigned long monitor_region_start __read_mostly;
module_param(monitor_region_start, ulong, 0600);
@@ -148,7 +149,8 @@ module_param(monitor_region_start, ulong, 0600);
* End of the target memory region in physical address.
*
* The end physical address of memory region that DAMON_LRU_SORT will do work
- * against. By default, biggest System RAM is used as the region.
+ * against. By default, the system's entire phyiscal memory is used as the
+ * region.
*/
static unsigned long monitor_region_end __read_mostly;
module_param(monitor_region_end, ulong, 0600);
@@ -335,7 +337,7 @@ static int damon_lru_sort_apply_parameters(void)
if (err)
goto out;
- err = damon_set_region_biggest_system_ram_default(param_target,
+ err = damon_set_region_system_rams_default(param_target,
&monitor_region_start,
&monitor_region_end,
param_ctx->addr_unit,
--
2.47.3
* [RFC PATCH 4/7] mm/damon/core: remove damon_set_region_biggest_system_ram_default()
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
` (2 preceding siblings ...)
2026-04-15 1:20 ` [RFC PATCH 3/7] mm/damon/lru_sort: " SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 5/7] mm/damon/stat: use damon_set_region_system_rams_default() SeongJae Park
` (2 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm
Now nobody is using damon_set_region_biggest_system_ram_default().
Remove it.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 5 ----
mm/damon/core.c | 64 -------------------------------------------
2 files changed, 69 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index c4cdaa01ea530..1a4f754ad89be 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -1012,11 +1012,6 @@ int damon_set_region_system_rams_default(struct damon_target *t,
unsigned long addr_unit,
unsigned long min_region_sz);
-int damon_set_region_biggest_system_ram_default(struct damon_target *t,
- unsigned long *start, unsigned long *end,
- unsigned long addr_unit,
- unsigned long min_region_sz);
-
#endif /* CONFIG_DAMON */
#endif /* _DAMON_H */
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 3cef74ba7f0f4..d2c8c088e7451 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -3266,70 +3266,6 @@ int damon_set_region_system_rams_default(struct damon_target *t,
return damon_set_regions(t, &addr_range, 1, min_region_sz);
}
-static int walk_system_ram(struct resource *res, void *arg)
-{
- struct resource *a = arg;
-
- if (resource_size(a) < resource_size(res)) {
- a->start = res->start;
- a->end = res->end;
- }
- return 0;
-}
-
-/*
- * Find biggest 'System RAM' resource and store its start and end address in
- * @start and @end, respectively. If no System RAM is found, returns false.
- */
-static bool damon_find_biggest_system_ram(unsigned long *start,
- unsigned long *end, unsigned long addr_unit)
-
-{
- struct resource res = {};
-
- walk_system_ram_res(0, -1, &res, walk_system_ram);
- *start = damon_res_to_core_addr(res.start, addr_unit);
- *end = damon_res_to_core_addr(res.end + 1, addr_unit);
- if (*end <= *start)
- return false;
- return true;
-}
-
-/**
- * damon_set_region_biggest_system_ram_default() - Set the region of the given
- * monitoring target as requested, or biggest 'System RAM'.
- * @t: The monitoring target to set the region.
- * @start: The pointer to the start address of the region.
- * @end: The pointer to the end address of the region.
- * @addr_unit: The address unit for the damon_ctx of @t.
- * @min_region_sz: Minimum region size.
- *
- * This function sets the region of @t as requested by @start and @end. If the
- * values of @start and @end are zero, however, this function finds the biggest
- * 'System RAM' resource and sets the region to cover the resource. In the
- * latter case, this function saves the start and end addresses of the resource
- * in @start and @end, respectively.
- *
- * Return: 0 on success, negative error code otherwise.
- */
-int damon_set_region_biggest_system_ram_default(struct damon_target *t,
- unsigned long *start, unsigned long *end,
- unsigned long addr_unit, unsigned long min_region_sz)
-{
- struct damon_addr_range addr_range;
-
- if (*start > *end)
- return -EINVAL;
-
- if (!*start && !*end &&
- !damon_find_biggest_system_ram(start, end, addr_unit))
- return -EINVAL;
-
- addr_range.start = *start;
- addr_range.end = *end;
- return damon_set_regions(t, &addr_range, 1, min_region_sz);
-}
-
/*
* damon_moving_sum() - Calculate an inferred moving sum value.
* @mvsum: Inferred sum of the last @len_window values.
--
2.47.3
* [RFC PATCH 5/7] mm/damon/stat: use damon_set_region_system_rams_default()
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
` (3 preceding siblings ...)
2026-04-15 1:20 ` [RFC PATCH 4/7] mm/damon/core: remove damon_set_region_biggest_system_ram_default() SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 6/7] Docs/admin-guide/mm/damon/reclaim: update for entire memory monitoring SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: " SeongJae Park
6 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Andrew Morton, damon, linux-kernel, linux-mm
damon_stat_set_monitoring_region() is nearly a duplicate of the core
function, damon_set_region_system_rams_default(). Use the core
implementation.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/stat.c | 53 +++----------------------------------------------
1 file changed, 3 insertions(+), 50 deletions(-)
diff --git a/mm/damon/stat.c b/mm/damon/stat.c
index 4901e9a7c3398..da2ebf90ef64e 100644
--- a/mm/damon/stat.c
+++ b/mm/damon/stat.c
@@ -154,59 +154,12 @@ static int damon_stat_damon_call_fn(void *data)
return 0;
}
-struct damon_stat_system_ram_range_walk_arg {
- bool walked;
- struct resource res;
-};
-
-static int damon_stat_system_ram_walk_fn(struct resource *res, void *arg)
-{
- struct damon_stat_system_ram_range_walk_arg *a = arg;
-
- if (!a->walked) {
- a->walked = true;
- a->res.start = res->start;
- }
- a->res.end = res->end;
- return 0;
-}
-
-static unsigned long damon_stat_res_to_core_addr(resource_size_t ra,
- unsigned long addr_unit)
-{
- /*
- * Use div_u64() for avoiding linking errors related with __udivdi3,
- * __aeabi_uldivmod, or similar problems. This should also improve the
- * performance optimization (read div_u64() comment for the detail).
- */
- if (sizeof(ra) == 8 && sizeof(addr_unit) == 4)
- return div_u64(ra, addr_unit);
- return ra / addr_unit;
-}
-
-static int damon_stat_set_monitoring_region(struct damon_target *t,
- unsigned long addr_unit, unsigned long min_region_sz)
-{
- struct damon_addr_range addr_range;
- struct damon_stat_system_ram_range_walk_arg arg = {};
-
- walk_system_ram_res(0, -1, &arg, damon_stat_system_ram_walk_fn);
- if (!arg.walked)
- return -EINVAL;
- addr_range.start = damon_stat_res_to_core_addr(
- arg.res.start, addr_unit);
- addr_range.end = damon_stat_res_to_core_addr(
- arg.res.end + 1, addr_unit);
- if (addr_range.end <= addr_range.start)
- return -EINVAL;
- return damon_set_regions(t, &addr_range, 1, min_region_sz);
-}
-
static struct damon_ctx *damon_stat_build_ctx(void)
{
struct damon_ctx *ctx;
struct damon_attrs attrs;
struct damon_target *target;
+ unsigned long start = 0, end = 0;
ctx = damon_new_ctx();
if (!ctx)
@@ -236,8 +189,8 @@ static struct damon_ctx *damon_stat_build_ctx(void)
if (!target)
goto free_out;
damon_add_target(ctx, target);
- if (damon_stat_set_monitoring_region(target, ctx->addr_unit,
- ctx->min_region_sz))
+ if (damon_set_region_system_rams_default(target, &start, &end,
+ ctx->addr_unit, ctx->min_region_sz))
goto free_out;
return ctx;
free_out:
--
2.47.3
* [RFC PATCH 6/7] Docs/admin-guide/mm/damon/reclaim: update for entire memory monitoring
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
` (4 preceding siblings ...)
2026-04-15 1:20 ` [RFC PATCH 5/7] mm/damon/stat: use damon_set_region_system_rams_default() SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: " SeongJae Park
6 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Liam R. Howlett, Andrew Morton, David Hildenbrand,
Jonathan Corbet, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Shuah Khan, Suren Baghdasaryan, Vlastimil Babka, damon, linux-doc,
linux-kernel, linux-mm
Update DAMON_RECLAIM usage document for the changed default monitoring
target region selection.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
Documentation/admin-guide/mm/damon/reclaim.rst | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/mm/damon/reclaim.rst b/Documentation/admin-guide/mm/damon/reclaim.rst
index b14a065586271..ec7e3e32b4ac6 100644
--- a/Documentation/admin-guide/mm/damon/reclaim.rst
+++ b/Documentation/admin-guide/mm/damon/reclaim.rst
@@ -240,7 +240,8 @@ Start of target memory region in physical address.
The start physical address of memory region that DAMON_RECLAIM will do work
against. That is, DAMON_RECLAIM will find cold memory regions in this region
-and reclaims. By default, biggest System RAM is used as the region.
+and reclaims. By default, the system's entire physical memory is used as the
+region.
monitor_region_end
------------------
@@ -249,7 +250,8 @@ End of target memory region in physical address.
The end physical address of memory region that DAMON_RECLAIM will do work
against. That is, DAMON_RECLAIM will find cold memory regions in this region
-and reclaims. By default, biggest System RAM is used as the region.
+and reclaims. By default, the system's entire physical memory is used as the
+region.
addr_unit
---------
--
2.47.3
* [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: update for entire memory monitoring
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
` (5 preceding siblings ...)
2026-04-15 1:20 ` [RFC PATCH 6/7] Docs/admin-guide/mm/damon/reclaim: update for entire memory monitoring SeongJae Park
@ 2026-04-15 1:20 ` SeongJae Park
2026-04-15 2:57 ` sashiko-bot
6 siblings, 1 reply; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 1:20 UTC (permalink / raw)
Cc: SeongJae Park, Liam R. Howlett, Andrew Morton, David Hildenbrand,
Jonathan Corbet, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Shuah Khan, Suren Baghdasaryan, Vlastimil Babka, damon, linux-doc,
linux-kernel, linux-mm
Update DAMON_LRU_SORT usage document for the changed default monitoring
target region selection.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
Documentation/admin-guide/mm/damon/lru_sort.rst | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/mm/damon/lru_sort.rst b/Documentation/admin-guide/mm/damon/lru_sort.rst
index 25e2f042a383f..796b0a028555d 100644
--- a/Documentation/admin-guide/mm/damon/lru_sort.rst
+++ b/Documentation/admin-guide/mm/damon/lru_sort.rst
@@ -246,7 +246,8 @@ monitor_region_start
Start of target memory region in physical address.
The start physical address of memory region that DAMON_LRU_SORT will do work
-against. By default, biggest System RAM is used as the region.
+against. By default, the system's entire physical memory is used as the
+region.
monitor_region_end
------------------
@@ -254,7 +255,8 @@ monitor_region_end
End of target memory region in physical address.
The end physical address of memory region that DAMON_LRU_SORT will do work
-against. By default, biggest System RAM is used as the region.
+against. By default, the system's entire physical memory is used as the
+region.
addr_unit
---------
--
2.47.3
* Re: [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default()
2026-04-15 1:20 ` [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default() SeongJae Park
@ 2026-04-15 1:35 ` sashiko-bot
2026-04-15 3:36 ` SeongJae Park
0 siblings, 1 reply; 16+ messages in thread
From: sashiko-bot @ 2026-04-15 1:35 UTC (permalink / raw)
To: SeongJae Park; +Cc: damon
Sashiko AI review found 1 potential issue(s):
- [High] Creation of a single bounding box region for all System RAM severely degrades monitoring accuracy on sparse memory topologies by including non-RAM holes.
--
commit 8f8f43643f81a1ce41ad12d7eb7416faa6e23114
Author: SeongJae Park <sj@kernel.org>
mm/damon: introduce damon_set_region_system_rams_default()
This commit introduces a function to set the default target region to
cover all System RAM resources instead of just the largest one. It finds
the start of the first resource and the end of the last resource, and
sets this bounding box as the target region.
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 12544c60531d3..91e14caa4f6a1 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
[ ... ]
> +static int damon_system_ram_walk_fn(struct resource *res, void *arg)
> +{
> + struct damon_system_ram_range_walk_arg *a = arg;
> +
> + if (!a->walked) {
> + a->walked = true;
> + a->res.start = res->start;
> + }
> + a->res.end = res->end;
> + return 0;
> +}
[ ... ]
> +int damon_set_region_system_rams_default(struct damon_target *t,
> + unsigned long *start, unsigned long *end,
> + unsigned long addr_unit, unsigned long min_region_sz)
> +{
> + struct damon_addr_range addr_range;
> +
> + if (*start > *end)
> + return -EINVAL;
> +
> + if (!*start && !*end &&
> + !damon_find_system_rams_range(start, end, addr_unit))
> + return -EINVAL;
> +
> + addr_range.start = *start;
> + addr_range.end = *end;
> + return damon_set_regions(t, &addr_range, 1, min_region_sz);
Could this create a single bounding box region that includes large non-RAM
holes on systems with sparse memory layouts?
Since damon_system_ram_walk_fn() only records the minimum start address and
maximum end address across all System RAM resources, creating a single
contiguous damon_addr_range from this bounding box includes any
intervening address gaps.
On multi-socket NUMA systems or devices with large MMIO holes, this
could result in massive address ranges not backed by actual RAM. Because
DAMON uniformly samples random addresses within each region, sampling
might predominantly hit these unbacked memory holes.
Would this dilute the access sampling and cause actively used memory to be
incorrectly classified as cold? Should the code gather discrete System RAM
ranges into an array of struct damon_addr_range and pass them to
damon_set_regions() with nr_ranges > 1 to exclude holes?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=1
* Re: [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams
2026-04-15 1:20 ` [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams SeongJae Park
@ 2026-04-15 1:58 ` sashiko-bot
2026-04-15 3:47 ` SeongJae Park
0 siblings, 1 reply; 16+ messages in thread
From: sashiko-bot @ 2026-04-15 1:58 UTC (permalink / raw)
To: SeongJae Park; +Cc: damon
Sashiko AI review found 3 potential issue(s):
- [High] Unbounded iteration over physical memory holes causes severe scheduling latency and soft lockups.
- [High] Inclusion of physical holes permanently exhausts DAMOS quotas and starves actual memory reclaim.
- [High] Bounding box over sparse memory dilutes sampling accuracy and traps RAM in cold regions.
--
commit 1ebe17b8690373c96bc931bcacccad9cbc735e44
Author: SeongJae Park <sj@kernel.org>
mm/damon/reclaim: cover all system rams
This commit changes the default DAMON_RECLAIM monitoring region from the
biggest System RAM resource to a single bounding box that spans all System
RAM resources on the system.
> diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
> index 86da147786583..2a2343d751f3e 100644
> --- a/mm/damon/reclaim.c
> +++ b/mm/damon/reclaim.c
[ ... ]
> @@ -242,11 +244,9 @@ static int damon_reclaim_apply_parameters(void)
> damos_add_filter(scheme, filter);
> }
>
> - err = damon_set_region_biggest_system_ram_default(param_target,
> - &monitor_region_start,
> - &monitor_region_end,
> - param_ctx->addr_unit,
> - param_ctx->min_region_sz);
> + err = damon_set_region_system_rams_default(param_target,
> + &monitor_region_start, &monitor_region_end,
> + param_ctx->addr_unit, param_ctx->min_region_sz);
When using a single bounding box that covers all System RAM, does this
introduce large physical address holes (like PCIe MMIO gaps) into the
monitored region? On sparse NUMA topologies, these holes can be terabytes
in size.
If operations like DAMOS_PAGEOUT iterate linearly over the region
page-by-page, will they process these massive holes? For physical holes,
damon_get_folio() will return NULL, and the loop might continue without
yielding:
damon_pa_pageout() {
...
while (addr < end) {
...
folio = damon_get_folio(PHYS_PFN(addr));
if (!folio) {
addr += PAGE_SIZE;
continue;
}
...
}
}
Since cond_resched() is often placed outside or after such loops, could this
continuous iteration over unbacked holes trigger soft lockups and severe
scheduling latency?
How does the inclusion of massive physical holes interact with DAMOS quotas?
In damos_apply_scheme(), the scheme's charged size is incremented by the
evaluated region size (quota->charged_sz += sz). If regions span terabytes of
unbacked holes, evaluating them will quickly exhaust the quota (e.g., 128
MiB/sec for DAMON_RECLAIM) without reclaiming any actual pages.
Could this starve actual memory reclaim and inflate the sz_tried statistics?
Does the bounding box approach also affect sampling accuracy?
DAMON samples a random address within a region using
damon_rand(r->ar.start, r->ar.end). If the region is mostly composed of
physical holes, the sample will frequently land in unbacked space and return
zero accesses.
Since the region will continuously appear cold, could
damon_merge_two_regions() merge it with adjacent valid RAM regions? This
might trap valid RAM inside massive cold regions, diluting the sampling
probability for actual memory.
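The dilution effect is easy to quantify for a uniform sampler (hypothetical layout, not measured on real hardware):

```python
def hit_probability(ram_bytes, region_bytes):
    """Chance a uniform random sample in a region lands on backed RAM."""
    return ram_bytes / region_bytes

# Hypothetical sparse layout: 1000 GiB of RAM inside a 10 TiB bounding box
GiB, TiB = 1 << 30, 1 << 40
p = hit_probability(1000 * GiB, 10 * TiB)
print(p)  # 0.09765625 — over 90% of samples would land in holes
```

Whether DAMON's regions adjustment splits such a region apart fast enough to restore accuracy is the open question here.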
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=2
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC PATCH 3/7] mm/damon/lru_sort: cover all system rams
2026-04-15 1:20 ` [RFC PATCH 3/7] mm/damon/lru_sort: " SeongJae Park
@ 2026-04-15 2:36 ` sashiko-bot
2026-04-15 3:52 ` SeongJae Park
0 siblings, 1 reply; 16+ messages in thread
From: sashiko-bot @ 2026-04-15 2:36 UTC (permalink / raw)
To: SeongJae Park; +Cc: damon
Sashiko AI review found 3 potential issue(s):
- [High] Monitoring the bounding box of all System RAM resources encompasses massive physical address holes, fundamentally breaking DAMON's sampling-based access check on sparse memory systems.
- [High] Applying schemes to the newly-enlarged bounding box can cause a soft lockup panic due to unbounded page iteration without yielding.
- [Low] Misspelled 'phyiscal' in module parameter documentation.
--
commit 11d2e4765abf8999777a27c53c055f1c43ca53bf
Author: SeongJae Park <sj@kernel.org>
mm/damon/lru_sort: cover all system rams
This commit updates DAMON_LRU_SORT to cover all system RAM instead of
just the biggest System RAM resource. The default monitor region start
and end parameter descriptions and initializations are modified to
reflect the entire physical memory space.
> diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
> index 554559d729760..7f8c26890ffb2 100644
> --- a/mm/damon/lru_sort.c
> +++ b/mm/damon/lru_sort.c
> @@ -140,7 +140,8 @@ DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_lru_sort_mon_attrs);
> * Start of the target memory region in physical address.
> *
> * The start physical address of memory region that DAMON_LRU_SORT will do work
> - * against. By default, biggest System RAM is used as the region.
> + * against. By default, the system's entire phyiscal memory is used as the
This isn't a bug, but there is a typo (phyiscal) here.
> + * region.
> */
> static unsigned long monitor_region_start __read_mostly;
> module_param(monitor_region_start, ulong, 0600);
> @@ -149,7 +150,8 @@ module_param(monitor_region_start, ulong, 0600);
> * End of the target memory region in physical address.
> *
> * The end physical address of memory region that DAMON_LRU_SORT will do work
> - * against. By default, biggest System RAM is used as the region.
> + * against. By default, the system's entire phyiscal memory is used as the
There is another typo (phyiscal) here as well.
> + * region.
> */
> static unsigned long monitor_region_end __read_mostly;
> module_param(monitor_region_end, ulong, 0600);
[ ... ]
> @@ -336,7 +338,7 @@ static int damon_lru_sort_apply_parameters(void)
> if (err)
> goto out;
>
> - err = damon_set_region_biggest_system_ram_default(param_target,
> + err = damon_set_region_system_rams_default(param_target,
> &monitor_region_start,
> &monitor_region_end,
> param_ctx->addr_unit,
Will monitoring the bounding box of all System RAM resources encompass massive
physical address holes on sparse memory systems?
Because DAMON estimates access frequency by sampling uniformly at random from
the region:
r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
If the bounding box spans across terabytes of address gaps between NUMA nodes,
would the overwhelming majority of samples hit unpopulated holes?
Since sampling a hole returns no folio, does this artificially dilute the
apparent access frequency of valid hot RAM within the region and misidentify
it as cold?
Can this also cause soft lockups during scheme application?
When applying a scheme, the region is iterated page-by-page. For unpopulated
gaps, the folio check is skipped and the address is incremented:
while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
continue;
}
...
Since there is no cond_resched() inside this tight loop, and because valid
folios are not found (meaning quota variables don't increment to abort early),
could iterating over terabytes of unpopulated gaps trigger the soft lockup
watchdog on PREEMPT_NONE or PREEMPT_VOLUNTARY kernels?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=3
* Re: [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: update for entire memory monitoring
2026-04-15 1:20 ` [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: " SeongJae Park
@ 2026-04-15 2:57 ` sashiko-bot
2026-04-15 3:53 ` SeongJae Park
0 siblings, 1 reply; 16+ messages in thread
From: sashiko-bot @ 2026-04-15 2:57 UTC (permalink / raw)
To: SeongJae Park; +Cc: damon
Sashiko AI review found 1 potential issue(s):
- [Low] Typo in documentation update for `monitor_region_start`: 'phyiscal' instead of 'physical'.
--
commit 86d0c933a3a562746e2eb8114113949d40da7811
Author: SeongJae Park <sj@kernel.org>
Docs/admin-guide/mm/damon/lru_sort: update for entire memory monitoring
Update DAMON_LRU_SORT usage document for the changed default monitoring
target region selection.
> diff --git a/Documentation/admin-guide/mm/damon/lru_sort.rst b/Documentation/admin-guide/mm/damon/lru_sort.rst
> index 14cc6b2db8973..012accf375b33 100644
> --- a/Documentation/admin-guide/mm/damon/lru_sort.rst
> +++ b/Documentation/admin-guide/mm/damon/lru_sort.rst
> @@ -246,7 +246,8 @@ monitor_region_start
> Start of target memory region in physical address.
>
> The start physical address of memory region that DAMON_LRU_SORT will do work
> -against. By default, biggest System RAM is used as the region.
> +against. By default, the system's entire phyiscal memory is used as the
This isn't a bug, but there is a typo (phyiscal instead of physical) here.
> +region.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=7
* Re: [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default()
2026-04-15 1:35 ` sashiko-bot
@ 2026-04-15 3:36 ` SeongJae Park
0 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 3:36 UTC (permalink / raw)
To: sashiko-bot; +Cc: SeongJae Park, damon
TL;DR: Sashiko found no real issue in this patch.
On Wed, 15 Apr 2026 01:35:52 +0000 sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [High] Creation of a single bounding box region for all System RAM severely degrades monitoring accuracy on sparse memory topologies by including non-RAM holes.
> --
>
> commit 8f8f43643f81a1ce41ad12d7eb7416faa6e23114
> Author: SeongJae Park <sj@kernel.org>
>
> mm/damon: introduce damon_set_region_system_rams_default()
>
> This commit introduces a function to set the default target region to
> cover all System RAM resources instead of just the largest one. It finds
> the start of the first resource and the end of the last resource, and
> sets this bounding box as the target region.
>
> > diff --git a/mm/damon/core.c b/mm/damon/core.c
> > index 12544c60531d3..91e14caa4f6a1 100644
> > --- a/mm/damon/core.c
> > +++ b/mm/damon/core.c
> [ ... ]
> > +static int damon_system_ram_walk_fn(struct resource *res, void *arg)
> > +{
> > + struct damon_system_ram_range_walk_arg *a = arg;
> > +
> > + if (!a->walked) {
> > + a->walked = true;
> > + a->res.start = res->start;
> > + }
> > + a->res.end = res->end;
> > + return 0;
> > +}
> [ ... ]
> > +int damon_set_region_system_rams_default(struct damon_target *t,
> > + unsigned long *start, unsigned long *end,
> > + unsigned long addr_unit, unsigned long min_region_sz)
> > +{
> > + struct damon_addr_range addr_range;
> > +
> > + if (*start > *end)
> > + return -EINVAL;
> > +
> > + if (!*start && !*end &&
> > + !damon_find_system_rams_range(start, end, addr_unit))
> > + return -EINVAL;
> > +
> > + addr_range.start = *start;
> > + addr_range.end = *end;
> > + return damon_set_regions(t, &addr_range, 1, min_region_sz);
>
> Could this create a single bounding box region that includes large non-RAM
> holes on systems with sparse memory layouts?
>
> Since damon_system_ram_walk_fn() only records the minimum start address and
> maximum end address across all System RAM resources, creating a single
> contiguous damon_addr_range from this bounding box includes any
> intervening address gaps.
>
> On multi-socket NUMA systems or devices with large MMIO holes, this
> could result in massive address ranges not backed by actual RAM. Because
> DAMON uniformly samples random addresses within each region, sampling
> might predominantly hit these unbacked memory holes.
>
> Would this dilute the access sampling and cause actively used memory to be
> incorrectly classified as cold? Should the code gather discrete System RAM
> ranges into an array of struct damon_addr_range and pass them to
> damon_set_regions() with nr_ranges > 1 to exclude holes?
This is true. But it shouldn't cause real problems because of the adaptive
regions adjustment mechanism of DAMON, as also mentioned in the cover letter.
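For readers unfamiliar with the mechanism being referred to: DAMON merges adjacent regions only when their measured access counts are similar, so an always-cold hole region should not be merged into a hot RAM region. A simplified illustration of that merge criterion (the real check lives in damon_merge_two_regions() and uses a user-configurable threshold; the counts below are hypothetical):

```python
def can_merge(nr_accesses_a, nr_accesses_b, threshold):
    """Simplified DAMON-style merge check: adjacent regions merge only
    when their access counts differ by at most the threshold."""
    return abs(nr_accesses_a - nr_accesses_b) <= threshold

# A hot RAM region next to an always-cold hole region stays unmerged,
# so the hole cannot swallow the hot area:
print(can_merge(10, 0, 2))  # False
print(can_merge(0, 1, 2))   # True — cold regions and holes can coalesce
```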
>
> --
> Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=1
Thanks,
SJ
* Re: [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams
2026-04-15 1:58 ` sashiko-bot
@ 2026-04-15 3:47 ` SeongJae Park
0 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 3:47 UTC (permalink / raw)
To: sashiko-bot; +Cc: SeongJae Park, damon
On Wed, 15 Apr 2026 01:58:29 +0000 sashiko-bot@kernel.org wrote:
> Sashiko AI review found 3 potential issue(s):
> - [High] Unbounded iteration over physical memory holes causes severe scheduling latency and soft lockups.
> - [High] Inclusion of physical holes permanently exhausts DAMOS quotas and starves actual memory reclaim.
> - [High] Bounding box over sparse memory dilutes sampling accuracy and traps RAM in cold regions.
I don't think any of these are real issues for this patch. See below for more
detailed answers.
> --
>
> commit 1ebe17b8690373c96bc931bcacccad9cbc735e44
> Author: SeongJae Park <sj@kernel.org>
>
> mm/damon/reclaim: cover all system rams
>
> This commit changes the default DAMON_RECLAIM monitoring region from the
> biggest System RAM resource to a single bounding box that spans all System
> RAM resources on the system.
>
> > diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
> > index 86da147786583..2a2343d751f3e 100644
> > --- a/mm/damon/reclaim.c
> > +++ b/mm/damon/reclaim.c
> [ ... ]
> > @@ -242,11 +244,9 @@ static int damon_reclaim_apply_parameters(void)
> > damos_add_filter(scheme, filter);
> > }
> >
> > - err = damon_set_region_biggest_system_ram_default(param_target,
> > - &monitor_region_start,
> > - &monitor_region_end,
> > - param_ctx->addr_unit,
> > - param_ctx->min_region_sz);
> > + err = damon_set_region_system_rams_default(param_target,
> > + &monitor_region_start, &monitor_region_end,
> > + param_ctx->addr_unit, param_ctx->min_region_sz);
>
> When using a single bounding box that covers all System RAM, does this
> introduce large physical address holes (like PCIe MMIO gaps) into the
> monitored region? On sparse NUMA topologies, these holes can be terabytes
> in size.
>
> If operations like DAMOS_PAGEOUT iterate linearly over the region
> page-by-page, will they process these massive holes? For physical holes,
> damon_get_folio() will return NULL, and the loop might continue without
> yielding:
>
> damon_pa_pageout() {
> ...
> while (addr < end) {
> ...
> folio = damon_get_folio(PHYS_PFN(addr));
> if (!folio) {
> addr += PAGE_SIZE;
> continue;
> }
> ...
> }
> }
>
> Since cond_resched() is often placed outside or after such loops, could this
> continuous iteration over unbacked holes trigger soft lockups and severe
> scheduling latency?
Theoretically that's possible. But I believe damon_get_folio() should be
lightweight enough to not cause such problems in common cases. Also, we allow
users to manually set the monitoring regions for special cases.
So unless this causes real issues in common cases, I think this patch is
good to go as is.
>
> How does the inclusion of massive physical holes interact with DAMOS quotas?
>
> In damos_apply_scheme(), the scheme's charged size is incremented by the
> evaluated region size (quota->charged_sz += sz). If regions span terabytes of
> unbacked holes, evaluating them will quickly exhaust the quota (e.g., 128
> MiB/sec for DAMON_RECLAIM) without reclaiming any actual pages.
>
> Could this starve actual memory reclaim and inflate the sz_tried statistics?
Yes, this could exhaust the quota. But, again, users can manually set the
regions if this becomes a real problem on their setup. We are also working on
the action failed regions quota charge ratio [1], which could also help the
corner case.
>
> Does the bounding box approach also affect sampling accuracy?
>
> DAMON samples a random address within a region using
> damon_rand(r->ar.start, r->ar.end). If the region is mostly composed of
> physical holes, the sample will frequently land in unbacked space and return
> zero accesses.
>
> Since the region will continuously appear cold, could
> damon_merge_two_regions() merge it with adjacent valid RAM regions? This
> might trap valid RAM inside massive cold regions, diluting the sampling
> probability for actual memory.
Adaptive regions adjustment should keep things from going to such an extreme.
>
> --
> Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=2
[1] https://lore.kernel.org/20260412161957.82835-1-sj@kernel.org
Thanks,
SJ
[...]
* Re: [RFC PATCH 3/7] mm/damon/lru_sort: cover all system rams
2026-04-15 2:36 ` sashiko-bot
@ 2026-04-15 3:52 ` SeongJae Park
0 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 3:52 UTC (permalink / raw)
To: sashiko-bot; +Cc: SeongJae Park, damon
On Wed, 15 Apr 2026 02:36:28 +0000 sashiko-bot@kernel.org wrote:
> Sashiko AI review found 3 potential issue(s):
> - [High] Monitoring the bounding box of all System RAM resources encompasses massive physical address holes, fundamentally breaking DAMON's sampling-based access check on sparse memory systems.
> - [High] Applying schemes to the newly-enlarged bounding box can cause a soft lockup panic due to unbounded page iteration without yielding.
> - [Low] Misspelled 'phyiscal' in module parameter documentation.
Other than the typo, no real issues in my opinion. Read below for more
detailed answers.
> --
>
> commit 11d2e4765abf8999777a27c53c055f1c43ca53bf
> Author: SeongJae Park <sj@kernel.org>
>
> mm/damon/lru_sort: cover all system rams
>
> This commit updates DAMON_LRU_SORT to cover all system RAM instead of
> just the biggest System RAM resource. The default monitor region start
> and end parameter descriptions and initializations are modified to
> reflect the entire physical memory space.
>
> > diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
> > index 554559d729760..7f8c26890ffb2 100644
> > --- a/mm/damon/lru_sort.c
> > +++ b/mm/damon/lru_sort.c
> > @@ -140,7 +140,8 @@ DEFINE_DAMON_MODULES_MON_ATTRS_PARAMS(damon_lru_sort_mon_attrs);
> > * Start of the target memory region in physical address.
> > *
> > * The start physical address of memory region that DAMON_LRU_SORT will do work
> > - * against. By default, biggest System RAM is used as the region.
> > + * against. By default, the system's entire phyiscal memory is used as the
>
> This isn't a bug, but there is a typo (phyiscal) here.
Nice catch, I will fix this in the next revision.
>
> > + * region.
> > */
> > static unsigned long monitor_region_start __read_mostly;
> > module_param(monitor_region_start, ulong, 0600);
> > @@ -149,7 +150,8 @@ module_param(monitor_region_start, ulong, 0600);
> > * End of the target memory region in physical address.
> > *
> > * The end physical address of memory region that DAMON_LRU_SORT will do work
> > - * against. By default, biggest System RAM is used as the region.
> > + * against. By default, the system's entire phyiscal memory is used as the
>
> There is another typo (phyiscal) here as well.
I will fix this in the next revision.
>
> > + * region.
> > */
> > static unsigned long monitor_region_end __read_mostly;
> > module_param(monitor_region_end, ulong, 0600);
>
> [ ... ]
>
> > @@ -336,7 +338,7 @@ static int damon_lru_sort_apply_parameters(void)
> > if (err)
> > goto out;
> >
> > - err = damon_set_region_biggest_system_ram_default(param_target,
> > + err = damon_set_region_system_rams_default(param_target,
> > &monitor_region_start,
> > &monitor_region_end,
> > param_ctx->addr_unit,
>
> Will monitoring the bounding box of all System RAM resources encompass massive
> physical address holes on sparse memory systems?
>
> Because DAMON estimates access frequency by sampling uniformly at random from
> the region:
>
> r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
>
> If the bounding box spans across terabytes of address gaps between NUMA nodes,
> would the overwhelming majority of samples hit unpopulated holes?
>
> Since sampling a hole returns no folio, does this artificially dilute the
> apparent access frequency of valid hot RAM within the region and misidentify
> it as cold?
Adaptive regions adjustment should avoid this issue.
>
> Can this also cause soft lockups during scheme application?
>
> When applying a scheme, the region is iterated page-by-page. For unpopulated
> gaps, the folio check is skipped and the address is incremented:
>
> while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
> folio = damon_get_folio(PHYS_PFN(addr));
> if (damon_pa_invalid_damos_folio(folio, s)) {
> addr += PAGE_SIZE;
> continue;
> }
> ...
>
> Since there is no cond_resched() inside this tight loop, and because valid
> folios are not found (meaning quota variables don't increment to abort early),
> could iterating over terabytes of unpopulated gaps trigger the soft lockup
> watchdog on PREEMPT_NONE or PREEMPT_VOLUNTARY kernels?
As I replied to the previous patch review, though it is theoretically possible,
I don't think such cases are common. Also, if it is really such a corner case,
users can manually set the monitoring regions.
>
> --
> Sashiko AI review · https://sashiko.dev/#/patchset/20260415012048.76508-1-sj@kernel.org?part=3
Thanks,
SJ
* Re: [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: update for entire memory monitoring
2026-04-15 2:57 ` sashiko-bot
@ 2026-04-15 3:53 ` SeongJae Park
0 siblings, 0 replies; 16+ messages in thread
From: SeongJae Park @ 2026-04-15 3:53 UTC (permalink / raw)
To: sashiko-bot; +Cc: SeongJae Park, damon
On Wed, 15 Apr 2026 02:57:40 +0000 sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Low] Typo in documentation update for `monitor_region_start`: 'phyiscal' instead of 'physical'.
Good eye. I will fix this in the next revision.
Thanks,
SJ
[...]
end of thread, other threads:[~2026-04-15 3:53 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-15 1:20 [RFC PATCH 0/7] mm/damon/reclaim,lru_sort: monitor all system rams by default SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 1/7] mm/damon: introduce damon_set_region_system_rams_default() SeongJae Park
2026-04-15 1:35 ` sashiko-bot
2026-04-15 3:36 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 2/7] mm/damon/reclaim: cover all system rams SeongJae Park
2026-04-15 1:58 ` sashiko-bot
2026-04-15 3:47 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 3/7] mm/damon/lru_sort: " SeongJae Park
2026-04-15 2:36 ` sashiko-bot
2026-04-15 3:52 ` SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 4/7] mm/damon/core: remove damon_set_region_biggest_system_ram_default() SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 5/7] mm/damon/stat: use damon_set_region_system_rams_default() SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 6/7] Docs/admin-guide/mm/damon/reclaim: update for entire memory monitoring SeongJae Park
2026-04-15 1:20 ` [RFC PATCH 7/7] Docs/admin-guide/mm/damon/lru_sort: " SeongJae Park
2026-04-15 2:57 ` sashiko-bot
2026-04-15 3:53 ` SeongJae Park