* [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
Previously, DAMON's physical address space monitoring supported only
memory ranges below 4GB on LPAE-enabled systems. This was due to
the use of 'unsigned long' in 'struct damon_addr_range', which is
32-bit on ARM32 even with LPAE enabled[1].

To add DAMON support for ARM32 with LPAE enabled, a new core layer
parameter called 'addr_unit' was introduced[2]. The operations set
layer can translate a core layer address to the real address by
multiplying the core layer address by the parameter value. Support of
the parameter is up to each operations set implementation, though.
For example, operations set implementations for virtual address
spaces can simply ignore the parameter. Add the support to paddr,
the DAMON operations set implementation for the physical address
space, as we have a clear use case for that.
[1] https://lore.kernel.org/all/20250408075553.959388-1-zuoze1@huawei.com/
[2] https://lore.kernel.org/all/20250416042551.158131-1-sj@kernel.org/
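For reference, the whole translation is as simple as the
damon_pa_phys_addr() helper that patch 02 adds, reproduced below with
an explanatory comment. With addr_unit == 16, for example, the 32-bit
core address space can cover a 64GiB physical range.

	/*
	 * Translate a core layer address to a real (physical) address.
	 * The core address is 'unsigned long' (32-bit on ARM32), while
	 * phys_addr_t is 64-bit when LPAE is enabled.
	 */
	static phys_addr_t damon_pa_phys_addr(
			unsigned long addr, unsigned long addr_unit)
	{
		return (phys_addr_t)addr * addr_unit;
	}
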
Changes in v2:
- set DAMOS_PAGEOUT, DAMOS_LRU_[DE]PRIO, DAMOS_MIGRATE_{HOT,COLD} and
  DAMOS_STAT stats in the core address unit.
- pass the ctx->min_region value to replace the original synchronization.
- drop the DAMOS stats type changes; keep them as the 'unsigned long' type.
- separate the addr_unit support for DAMON_RECLAIM and LRU_SORT out from
  this patch series.
Quanmin Yan (2):
mm/damon: add damon_ctx->min_region
mm/damon/core: prevent unnecessary overflow in
damos_set_effective_quota()
SeongJae Park (10):
mm/damon/core: add damon_ctx->addr_unit
mm/damon/paddr: support addr_unit for access monitoring
mm/damon/paddr: support addr_unit for DAMOS_PAGEOUT
mm/damon/paddr: support addr_unit for DAMOS_LRU_[DE]PRIO
mm/damon/paddr: support addr_unit for MIGRATE_{HOT,COLD}
mm/damon/paddr: support addr_unit for DAMOS_STAT
mm/damon/sysfs: implement addr_unit file under context dir
Docs/mm/damon/design: document 'address unit' parameter
Docs/admin-guide/mm/damon/usage: document addr_unit file
Docs/ABI/damon: document addr_unit file
.../ABI/testing/sysfs-kernel-mm-damon | 7 ++
Documentation/admin-guide/mm/damon/usage.rst | 11 +-
Documentation/mm/damon/design.rst | 16 ++-
include/linux/damon.h | 7 +-
mm/damon/core.c | 75 +++++++------
mm/damon/paddr.c | 106 +++++++++++-------
mm/damon/sysfs.c | 41 ++++++-
mm/damon/tests/core-kunit.h | 16 +--
mm/damon/tests/vaddr-kunit.h | 2 +-
mm/damon/vaddr.c | 2 +-
10 files changed, 188 insertions(+), 95 deletions(-)
--
2.43.0
* [RFC PATCH mm-next v2 01/12] mm/damon/core: add damon_ctx->addr_unit
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
In some cases, some of the real addresses that are handled by the
underlying operations set cannot be represented by DAMON, since it
uses only 'unsigned long' as the address type. Using DAMON for
physical address space monitoring of 32-bit ARM devices with the
large physical address extension (LPAE) is one example[1].

Add a parameter named 'addr_unit' to the core layer to help such
cases. DAMON core API callers can set it as the scale factor that
will be used by the operations set for translating the core layer's
addresses to the real addresses, by multiplying the core layer
address by the parameter value. Support of the parameter is up to
each operations set layer. The support from the physical address
space operations set (paddr) will be added by the following commits.
[1] https://lore.kernel.org/20250408075553.959388-1-zuoze1@huawei.com
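For example, a kernel API caller that wants to monitor a 36-bit LPAE
physical address space could configure the context as in the sketch
below. This is a hypothetical illustration, not part of this patch;
the value 16 and the example_setup() name are made up for the example.

	#include <linux/damon.h>

	static int example_setup(void)
	{
		struct damon_ctx *ctx = damon_new_ctx();

		if (!ctx)
			return -ENOMEM;
		/*
		 * With addr_unit == 16, core address X stands for real
		 * address X * 16, so [0, 64GiB) fits in 32-bit core
		 * addresses.
		 */
		ctx->addr_unit = 16;
		return 0;
	}
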
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
include/linux/damon.h | 3 ++-
mm/damon/core.c | 3 +++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index d01bfee80bd6..6fa52f7495d9 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -746,7 +746,7 @@ struct damon_attrs {
* Accesses to other fields must be protected by themselves.
*
* @ops: Set of monitoring operations for given use cases.
- *
+ * @addr_unit: Scale factor for core to ops address conversion.
* @adaptive_targets: Head of monitoring targets (&damon_target) list.
* @schemes: Head of schemes (&damos) list.
*/
@@ -788,6 +788,7 @@ struct damon_ctx {
struct mutex kdamond_lock;
struct damon_operations ops;
+ unsigned long addr_unit;
struct list_head adaptive_targets;
struct list_head schemes;
diff --git a/mm/damon/core.c b/mm/damon/core.c
index cb41fddca78c..8f8aa84953ac 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -544,6 +544,8 @@ struct damon_ctx *damon_new_ctx(void)
ctx->attrs.min_nr_regions = 10;
ctx->attrs.max_nr_regions = 1000;
+ ctx->addr_unit = 1;
+
INIT_LIST_HEAD(&ctx->adaptive_targets);
INIT_LIST_HEAD(&ctx->schemes);
@@ -1245,6 +1247,7 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
return err;
}
dst->ops = src->ops;
+ dst->addr_unit = src->addr_unit;
return 0;
}
--
2.43.0
* [RFC PATCH mm-next v2 02/12] mm/damon/paddr: support addr_unit for access monitoring
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add support of the addr_unit parameter to the access monitoring
operations of paddr.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/paddr.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 0b67d9321460..d497373c2bd2 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -18,7 +18,13 @@
#include "../internal.h"
#include "ops-common.h"
-static void damon_pa_mkold(unsigned long paddr)
+static phys_addr_t damon_pa_phys_addr(
+ unsigned long addr, unsigned long addr_unit)
+{
+ return (phys_addr_t)addr * addr_unit;
+}
+
+static void damon_pa_mkold(phys_addr_t paddr)
{
struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
@@ -29,11 +35,12 @@ static void damon_pa_mkold(unsigned long paddr)
folio_put(folio);
}
-static void __damon_pa_prepare_access_check(struct damon_region *r)
+static void __damon_pa_prepare_access_check(struct damon_region *r,
+ unsigned long addr_unit)
{
r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
- damon_pa_mkold(r->sampling_addr);
+ damon_pa_mkold(damon_pa_phys_addr(r->sampling_addr, addr_unit));
}
static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
@@ -43,11 +50,11 @@ static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
damon_for_each_target(t, ctx) {
damon_for_each_region(r, t)
- __damon_pa_prepare_access_check(r);
+ __damon_pa_prepare_access_check(r, ctx->addr_unit);
}
}
-static bool damon_pa_young(unsigned long paddr, unsigned long *folio_sz)
+static bool damon_pa_young(phys_addr_t paddr, unsigned long *folio_sz)
{
struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
bool accessed;
@@ -62,23 +69,25 @@ static bool damon_pa_young(unsigned long paddr, unsigned long *folio_sz)
}
static void __damon_pa_check_access(struct damon_region *r,
- struct damon_attrs *attrs)
+ struct damon_attrs *attrs, unsigned long addr_unit)
{
- static unsigned long last_addr;
+ static phys_addr_t last_addr;
static unsigned long last_folio_sz = PAGE_SIZE;
static bool last_accessed;
+ phys_addr_t sampling_addr = damon_pa_phys_addr(
+ r->sampling_addr, addr_unit);
/* If the region is in the last checked page, reuse the result */
if (ALIGN_DOWN(last_addr, last_folio_sz) ==
- ALIGN_DOWN(r->sampling_addr, last_folio_sz)) {
+ ALIGN_DOWN(sampling_addr, last_folio_sz)) {
damon_update_region_access_rate(r, last_accessed, attrs);
return;
}
- last_accessed = damon_pa_young(r->sampling_addr, &last_folio_sz);
+ last_accessed = damon_pa_young(sampling_addr, &last_folio_sz);
damon_update_region_access_rate(r, last_accessed, attrs);
- last_addr = r->sampling_addr;
+ last_addr = sampling_addr;
}
static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
@@ -89,7 +98,8 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
damon_for_each_target(t, ctx) {
damon_for_each_region(r, t) {
- __damon_pa_check_access(r, &ctx->attrs);
+ __damon_pa_check_access(
+ r, &ctx->attrs, ctx->addr_unit);
max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
}
}
--
2.43.0
* [RFC PATCH mm-next v2 03/12] mm/damon/paddr: support addr_unit for DAMOS_PAGEOUT
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add addr_unit support for DAMOS_PAGEOUT action handling in the DAMOS
operations implementation for the physical address space.
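Note that sizes reported back to the core layer go through the reverse
translation: real sizes in bytes are divided by addr_unit, so that the
core layer keeps accounting in its own address unit. The hunks below
apply this to both sz_filter_passed and the return value; a simplified
sketch of the conversion (damon_pa_core_size() is a made-up name for
illustration):

	/* real (byte) size -> core layer (address unit) size */
	static unsigned long damon_pa_core_size(
			unsigned long sz_bytes, unsigned long addr_unit)
	{
		return sz_bytes / addr_unit;
	}
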
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/paddr.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index d497373c2bd2..826c2064dbfd 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -135,10 +135,11 @@ static bool damon_pa_invalid_damos_folio(struct folio *folio, struct damos *s)
return false;
}
-static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
+static unsigned long damon_pa_pageout(struct damon_region *r,
+ unsigned long addr_unit, struct damos *s,
unsigned long *sz_filter_passed)
{
- unsigned long addr, applied;
+ phys_addr_t addr, applied;
LIST_HEAD(folio_list);
bool install_young_filter = true;
struct damos_filter *filter;
@@ -159,8 +160,8 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
damos_add_filter(s, filter);
}
- addr = r->ar.start;
- while (addr < r->ar.end) {
+ addr = damon_pa_phys_addr(r->ar.start, addr_unit);
+ while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
@@ -170,7 +171,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
if (damos_pa_filter_out(s, folio))
goto put_folio;
else
- *sz_filter_passed += folio_size(folio);
+ *sz_filter_passed += folio_size(folio) / addr_unit;
folio_clear_referenced(folio);
folio_test_clear_young(folio);
@@ -189,7 +190,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
applied = reclaim_pages(&folio_list);
cond_resched();
s->last_applied = folio;
- return applied * PAGE_SIZE;
+ return applied * PAGE_SIZE / addr_unit;
}
static inline unsigned long damon_pa_mark_accessed_or_deactivate(
@@ -302,9 +303,11 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
struct damon_target *t, struct damon_region *r,
struct damos *scheme, unsigned long *sz_filter_passed)
{
+ unsigned long aunit = ctx->addr_unit;
+
switch (scheme->action) {
case DAMOS_PAGEOUT:
- return damon_pa_pageout(r, scheme, sz_filter_passed);
+ return damon_pa_pageout(r, aunit, scheme, sz_filter_passed);
case DAMOS_LRU_PRIO:
return damon_pa_mark_accessed(r, scheme, sz_filter_passed);
case DAMOS_LRU_DEPRIO:
--
2.43.0
* [RFC PATCH mm-next v2 04/12] mm/damon/paddr: support addr_unit for DAMOS_LRU_[DE]PRIO
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add addr_unit support for DAMOS_LRU_PRIO and DAMOS_LRU_DEPRIO action
handling in the DAMOS operations implementation for the physical
address space.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/paddr.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 826c2064dbfd..ed71dd0bf80e 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -194,14 +194,15 @@ static unsigned long damon_pa_pageout(struct damon_region *r,
}
static inline unsigned long damon_pa_mark_accessed_or_deactivate(
- struct damon_region *r, struct damos *s, bool mark_accessed,
+ struct damon_region *r, unsigned long addr_unit,
+ struct damos *s, bool mark_accessed,
unsigned long *sz_filter_passed)
{
- unsigned long addr, applied = 0;
+ phys_addr_t addr, applied = 0;
struct folio *folio;
- addr = r->ar.start;
- while (addr < r->ar.end) {
+ addr = damon_pa_phys_addr(r->ar.start, addr_unit);
+ while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
@@ -211,7 +212,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
if (damos_pa_filter_out(s, folio))
goto put_folio;
else
- *sz_filter_passed += folio_size(folio);
+ *sz_filter_passed += folio_size(folio) / addr_unit;
if (mark_accessed)
folio_mark_accessed(folio);
@@ -223,20 +224,22 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
folio_put(folio);
}
s->last_applied = folio;
- return applied * PAGE_SIZE;
+ return applied * PAGE_SIZE / addr_unit;
}
static unsigned long damon_pa_mark_accessed(struct damon_region *r,
- struct damos *s, unsigned long *sz_filter_passed)
+ unsigned long addr_unit, struct damos *s,
+ unsigned long *sz_filter_passed)
{
- return damon_pa_mark_accessed_or_deactivate(r, s, true,
+ return damon_pa_mark_accessed_or_deactivate(r, addr_unit, s, true,
sz_filter_passed);
}
static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
- struct damos *s, unsigned long *sz_filter_passed)
+ unsigned long addr_unit, struct damos *s,
+ unsigned long *sz_filter_passed)
{
- return damon_pa_mark_accessed_or_deactivate(r, s, false,
+ return damon_pa_mark_accessed_or_deactivate(r, addr_unit, s, false,
sz_filter_passed);
}
@@ -309,9 +312,11 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
case DAMOS_PAGEOUT:
return damon_pa_pageout(r, aunit, scheme, sz_filter_passed);
case DAMOS_LRU_PRIO:
- return damon_pa_mark_accessed(r, scheme, sz_filter_passed);
+ return damon_pa_mark_accessed(r, aunit, scheme,
+ sz_filter_passed);
case DAMOS_LRU_DEPRIO:
- return damon_pa_deactivate_pages(r, scheme, sz_filter_passed);
+ return damon_pa_deactivate_pages(r, aunit, scheme,
+ sz_filter_passed);
case DAMOS_MIGRATE_HOT:
case DAMOS_MIGRATE_COLD:
return damon_pa_migrate(r, scheme, sz_filter_passed);
--
2.43.0
* [RFC PATCH mm-next v2 05/12] mm/damon/paddr: support addr_unit for MIGRATE_{HOT,COLD}
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add addr_unit support for DAMOS_MIGRATE_HOT and DAMOS_MIGRATE_COLD
action handling in the DAMOS operations implementation for the
physical address space.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/paddr.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index ed71dd0bf80e..0305e59818da 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -243,15 +243,16 @@ static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
sz_filter_passed);
}
-static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
+static unsigned long damon_pa_migrate(struct damon_region *r,
+ unsigned long addr_unit, struct damos *s,
unsigned long *sz_filter_passed)
{
- unsigned long addr, applied;
+ phys_addr_t addr, applied;
LIST_HEAD(folio_list);
struct folio *folio;
- addr = r->ar.start;
- while (addr < r->ar.end) {
+ addr = damon_pa_phys_addr(r->ar.start, addr_unit);
+ while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
@@ -261,7 +262,7 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
if (damos_pa_filter_out(s, folio))
goto put_folio;
else
- *sz_filter_passed += folio_size(folio);
+ *sz_filter_passed += folio_size(folio) / addr_unit;
if (!folio_isolate_lru(folio))
goto put_folio;
@@ -273,7 +274,7 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s,
applied = damon_migrate_pages(&folio_list, s->target_nid);
cond_resched();
s->last_applied = folio;
- return applied * PAGE_SIZE;
+ return applied * PAGE_SIZE / addr_unit;
}
static unsigned long damon_pa_stat(struct damon_region *r, struct damos *s,
@@ -319,7 +320,7 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
sz_filter_passed);
case DAMOS_MIGRATE_HOT:
case DAMOS_MIGRATE_COLD:
- return damon_pa_migrate(r, scheme, sz_filter_passed);
+ return damon_pa_migrate(r, aunit, scheme, sz_filter_passed);
case DAMOS_STAT:
return damon_pa_stat(r, scheme, sz_filter_passed);
default:
--
2.43.0
* [RFC PATCH mm-next v2 06/12] mm/damon/paddr: support addr_unit for DAMOS_STAT
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add addr_unit support for DAMOS_STAT action handling in the DAMOS
operations implementation for the physical address space.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/paddr.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 0305e59818da..5fad2f9a99a0 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -277,17 +277,18 @@ static unsigned long damon_pa_migrate(struct damon_region *r,
return applied * PAGE_SIZE / addr_unit;
}
-static unsigned long damon_pa_stat(struct damon_region *r, struct damos *s,
+static unsigned long damon_pa_stat(struct damon_region *r,
+ unsigned long addr_unit, struct damos *s,
unsigned long *sz_filter_passed)
{
- unsigned long addr;
+ phys_addr_t addr;
struct folio *folio;
if (!damos_ops_has_filter(s))
return 0;
- addr = r->ar.start;
- while (addr < r->ar.end) {
+ addr = damon_pa_phys_addr(r->ar.start, addr_unit);
+ while (addr < damon_pa_phys_addr(r->ar.end, addr_unit)) {
folio = damon_get_folio(PHYS_PFN(addr));
if (damon_pa_invalid_damos_folio(folio, s)) {
addr += PAGE_SIZE;
@@ -295,7 +296,7 @@ static unsigned long damon_pa_stat(struct damon_region *r, struct damos *s,
}
if (!damos_pa_filter_out(s, folio))
- *sz_filter_passed += folio_size(folio);
+ *sz_filter_passed += folio_size(folio) / addr_unit;
addr += folio_size(folio);
folio_put(folio);
}
@@ -322,7 +323,7 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
case DAMOS_MIGRATE_COLD:
return damon_pa_migrate(r, aunit, scheme, sz_filter_passed);
case DAMOS_STAT:
- return damon_pa_stat(r, scheme, sz_filter_passed);
+ return damon_pa_stat(r, aunit, scheme, sz_filter_passed);
default:
/* DAMOS actions that not yet supported by 'paddr'. */
break;
--
2.43.0
* [RFC PATCH mm-next v2 07/12] mm/damon/sysfs: implement addr_unit file under context dir
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Only DAMON kernel API callers can use the addr_unit parameter.
Implement a sysfs file to let DAMON sysfs ABI users use it.

Additionally, require addr_unit to be set to a non-zero value.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/sysfs.c | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index 6d2b0dab50cb..98bf15d403b2 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -834,6 +834,7 @@ static const struct damon_sysfs_ops_name damon_sysfs_ops_names[] = {
struct damon_sysfs_context {
struct kobject kobj;
enum damon_ops_id ops_id;
+ unsigned long addr_unit;
struct damon_sysfs_attrs *attrs;
struct damon_sysfs_targets *targets;
struct damon_sysfs_schemes *schemes;
@@ -849,6 +850,7 @@ static struct damon_sysfs_context *damon_sysfs_context_alloc(
return NULL;
context->kobj = (struct kobject){};
context->ops_id = ops_id;
+ context->addr_unit = 1;
return context;
}
@@ -997,6 +999,32 @@ static ssize_t operations_store(struct kobject *kobj,
return -EINVAL;
}
+static ssize_t addr_unit_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct damon_sysfs_context *context = container_of(kobj,
+ struct damon_sysfs_context, kobj);
+
+ return sysfs_emit(buf, "%lu\n", context->addr_unit);
+}
+
+static ssize_t addr_unit_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ struct damon_sysfs_context *context = container_of(kobj,
+ struct damon_sysfs_context, kobj);
+ unsigned long input_addr_unit;
+ int err = kstrtoul(buf, 0, &input_addr_unit);
+
+ if (err)
+ return err;
+ if (!input_addr_unit)
+ return -EINVAL;
+
+ context->addr_unit = input_addr_unit;
+ return count;
+}
+
static void damon_sysfs_context_release(struct kobject *kobj)
{
kfree(container_of(kobj, struct damon_sysfs_context, kobj));
@@ -1008,9 +1036,13 @@ static struct kobj_attribute damon_sysfs_context_avail_operations_attr =
static struct kobj_attribute damon_sysfs_context_operations_attr =
__ATTR_RW_MODE(operations, 0600);
+static struct kobj_attribute damon_sysfs_context_addr_unit_attr =
+ __ATTR_RW_MODE(addr_unit, 0600);
+
static struct attribute *damon_sysfs_context_attrs[] = {
&damon_sysfs_context_avail_operations_attr.attr,
&damon_sysfs_context_operations_attr.attr,
+ &damon_sysfs_context_addr_unit_attr.attr,
NULL,
};
ATTRIBUTE_GROUPS(damon_sysfs_context);
@@ -1397,6 +1429,7 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
err = damon_select_ops(ctx, sys_ctx->ops_id);
if (err)
return err;
+ ctx->addr_unit = sys_ctx->addr_unit;
err = damon_sysfs_set_attrs(ctx, sys_ctx->attrs);
if (err)
return err;
--
2.43.0
* [RFC PATCH mm-next v2 08/12] Docs/mm/damon/design: document 'address unit' parameter
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Add the 'addr_unit' parameter description to the DAMON design document.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
Documentation/mm/damon/design.rst | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index 2f6ba5c7f4c7..d9d5baa1ec87 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -67,7 +67,7 @@ processes, NUMA nodes, files, and backing memory devices would be supportable.
Also, if some architectures or devices support special optimized access check
features, those will be easily configurable.
-DAMON currently provides below three operation sets. Below two subsections
+DAMON currently provides below three operation sets. Below three subsections
describe how those work.
- vaddr: Monitor virtual address spaces of specific processes
@@ -135,6 +135,18 @@ the interference is the responsibility of sysadmins. However, it solves the
conflict with the reclaim logic using ``PG_idle`` and ``PG_young`` page flags,
as Idle page tracking does.
+Address Unit
+------------
+
+DAMON core layer uses the ``unsigned long`` type for monitoring target address
+ranges.  In some cases, the address space for a given operations set could be
+too large to be handled with the type.  ARM (32-bit) with the large physical
+address extension (LPAE) is an example.  For such cases, a per-operations set
+parameter called ``address unit`` is provided.  It represents the scale factor
+by which the core layer's address is multiplied to calculate the real address
+in the given address space.  Support of the ``address unit`` parameter is up
+to each operations set implementation.  ``paddr`` is the only operations set
+implementation that supports the parameter.
.. _damon_core_logic:
--
2.43.0
* [RFC PATCH mm-next v2 09/12] Docs/admin-guide/mm/damon/usage: document addr_unit file
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Document the addr_unit DAMON sysfs file in the DAMON usage document.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
Documentation/admin-guide/mm/damon/usage.rst | 11 +++++++----
Documentation/mm/damon/design.rst | 2 ++
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst
index ff3a2dda1f02..2cae60b6f3ca 100644
--- a/Documentation/admin-guide/mm/damon/usage.rst
+++ b/Documentation/admin-guide/mm/damon/usage.rst
@@ -61,7 +61,7 @@ comma (",").
│ :ref:`kdamonds <sysfs_kdamonds>`/nr_kdamonds
│ │ :ref:`0 <sysfs_kdamond>`/state,pid,refresh_ms
│ │ │ :ref:`contexts <sysfs_contexts>`/nr_contexts
- │ │ │ │ :ref:`0 <sysfs_context>`/avail_operations,operations
+ │ │ │ │ :ref:`0 <sysfs_context>`/avail_operations,operations,addr_unit
│ │ │ │ │ :ref:`monitoring_attrs <sysfs_monitoring_attrs>`/
│ │ │ │ │ │ intervals/sample_us,aggr_us,update_us
│ │ │ │ │ │ │ intervals_goal/access_bp,aggrs,min_sample_us,max_sample_us
@@ -188,9 +188,9 @@ details). At the moment, only one context per kdamond is supported, so only
contexts/<N>/
-------------
-In each context directory, two files (``avail_operations`` and ``operations``)
-and three directories (``monitoring_attrs``, ``targets``, and ``schemes``)
-exist.
+In each context directory, three files (``avail_operations``, ``operations``
+and ``addr_unit``) and three directories (``monitoring_attrs``, ``targets``,
+and ``schemes``) exist.
DAMON supports multiple types of :ref:`monitoring operations
<damon_design_configurable_operations_set>`, including those for virtual address
@@ -205,6 +205,9 @@ You can set and get what type of monitoring operations DAMON will use for the
context by writing one of the keywords listed in ``avail_operations`` file and
reading from the ``operations`` file.
+``addr_unit`` file is for setting and getting the :ref:`address unit
+<damon_design_addr_unit>` parameter of the operations set.
+
.. _sysfs_monitoring_attrs:
contexts/<N>/monitoring_attrs/
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index d9d5baa1ec87..80354f4f42ba 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -135,6 +135,8 @@ the interference is the responsibility of sysadmins. However, it solves the
conflict with the reclaim logic using ``PG_idle`` and ``PG_young`` page flags,
as Idle page tracking does.
+.. _damon_design_addr_unit:
+
Address Unit
------------
--
2.43.0
* [RFC PATCH mm-next v2 10/12] Docs/ABI/damon: document addr_unit file
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
From: SeongJae Park <sj@kernel.org>
Document the addr_unit DAMON sysfs file in the DAMON ABI document.
Signed-off-by: SeongJae Park <sj@kernel.org>
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
Documentation/ABI/testing/sysfs-kernel-mm-damon | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-damon b/Documentation/ABI/testing/sysfs-kernel-mm-damon
index 6791d879759e..cf4d66bd119d 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-damon
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-damon
@@ -77,6 +77,13 @@ Description: Writing a keyword for a monitoring operations set ('vaddr' for
Note that only the operations sets that listed in
'avail_operations' file are valid inputs.
+What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/addr_unit
+Date: Apr 2025
+Contact: SeongJae Park <sj@kernel.org>
+Description: Writing an integer to this file sets the 'address unit'
+ parameter of the given operations set of the context. Reading
+ the file returns the last-written 'address unit' value.
+
What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/monitoring_attrs/intervals/sample_us
Date: Mar 2022
Contact: SeongJae Park <sj@kernel.org>
--
2.43.0
* [RFC PATCH mm-next v2 11/12] mm/damon: add damon_ctx->min_region
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
Adopting addr_unit would make DAMON_MIN_REGION 'addr_unit * 4096'
bytes and cause data alignment issues[1].

Add damon_ctx->min_region to change DAMON_MIN_REGION from a global
macro value to a per-context variable.
[1] https://lore.kernel.org/all/527714dd-0e33-43ab-bbbd-d89670ba79e7@huawei.com
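The per-context value is derived from addr_unit so that the minimum
region keeps covering DAMON_MIN_REGION (4096) real bytes. A sketch of
the derivation, mirroring the expression this patch uses in
damon_commit_ctx() and damon_sysfs_apply_inputs():

	/* e.g., addr_unit == 16 makes the minimum region 256 core units */
	ctx->min_region = max(DAMON_MIN_REGION / ctx->addr_unit, 1);
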
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
include/linux/damon.h | 4 ++-
mm/damon/core.c | 68 ++++++++++++++++++++----------------
mm/damon/sysfs.c | 8 +++--
mm/damon/tests/core-kunit.h | 16 ++++-----
mm/damon/tests/vaddr-kunit.h | 2 +-
mm/damon/vaddr.c | 2 +-
6 files changed, 56 insertions(+), 44 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 6fa52f7495d9..bebd791f37f1 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -747,6 +747,7 @@ struct damon_attrs {
*
* @ops: Set of monitoring operations for given use cases.
* @addr_unit: Scale factor for core to ops address conversion.
+ * @min_region: Minimum Region Size.
* @adaptive_targets: Head of monitoring targets (&damon_target) list.
* @schemes: Head of schemes (&damos) list.
*/
@@ -789,6 +790,7 @@ struct damon_ctx {
struct damon_operations ops;
unsigned long addr_unit;
+ unsigned long min_region;
struct list_head adaptive_targets;
struct list_head schemes;
@@ -877,7 +879,7 @@ static inline void damon_insert_region(struct damon_region *r,
void damon_add_region(struct damon_region *r, struct damon_target *t);
void damon_destroy_region(struct damon_region *r, struct damon_target *t);
int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
- unsigned int nr_ranges);
+ unsigned int nr_ranges, unsigned long min_region);
void damon_update_region_access_rate(struct damon_region *r, bool accessed,
struct damon_attrs *attrs);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 8f8aa84953ac..980e271e42e9 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -208,7 +208,7 @@ static int damon_fill_regions_holes(struct damon_region *first,
* Return: 0 if success, or negative error code otherwise.
*/
int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
- unsigned int nr_ranges)
+ unsigned int nr_ranges, unsigned long min_region)
{
struct damon_region *r, *next;
unsigned int i;
@@ -245,16 +245,16 @@ int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
/* no region intersects with this range */
newr = damon_new_region(
ALIGN_DOWN(range->start,
- DAMON_MIN_REGION),
- ALIGN(range->end, DAMON_MIN_REGION));
+ min_region),
+ ALIGN(range->end, min_region));
if (!newr)
return -ENOMEM;
damon_insert_region(newr, damon_prev_region(r), r, t);
} else {
/* resize intersecting regions to fit in this range */
first->ar.start = ALIGN_DOWN(range->start,
- DAMON_MIN_REGION);
- last->ar.end = ALIGN(range->end, DAMON_MIN_REGION);
+ min_region);
+ last->ar.end = ALIGN(range->end, min_region);
/* fill possible holes in the range */
err = damon_fill_regions_holes(first, last, t);
@@ -545,6 +545,7 @@ struct damon_ctx *damon_new_ctx(void)
ctx->attrs.max_nr_regions = 1000;
ctx->addr_unit = 1;
+ ctx->min_region = DAMON_MIN_REGION;
INIT_LIST_HEAD(&ctx->adaptive_targets);
INIT_LIST_HEAD(&ctx->schemes);
@@ -1127,8 +1128,8 @@ static struct damon_target *damon_nth_target(int n, struct damon_ctx *ctx)
*
* If @src has no region, @dst keeps current regions.
*/
-static int damon_commit_target_regions(
- struct damon_target *dst, struct damon_target *src)
+static int damon_commit_target_regions(struct damon_target *dst,
+ struct damon_target *src, unsigned long src_min_region)
{
struct damon_region *src_region;
struct damon_addr_range *ranges;
@@ -1145,18 +1146,19 @@ static int damon_commit_target_regions(
i = 0;
damon_for_each_region(src_region, src)
ranges[i++] = src_region->ar;
- err = damon_set_regions(dst, ranges, i);
+ err = damon_set_regions(dst, ranges, i, src_min_region);
kfree(ranges);
return err;
}
static int damon_commit_target(
struct damon_target *dst, bool dst_has_pid,
- struct damon_target *src, bool src_has_pid)
+ struct damon_target *src, bool src_has_pid,
+ unsigned long src_min_region)
{
int err;
- err = damon_commit_target_regions(dst, src);
+ err = damon_commit_target_regions(dst, src, src_min_region);
if (err)
return err;
if (dst_has_pid)
@@ -1178,7 +1180,8 @@ static int damon_commit_targets(
if (src_target) {
err = damon_commit_target(
dst_target, damon_target_has_pid(dst),
- src_target, damon_target_has_pid(src));
+ src_target, damon_target_has_pid(src),
+ src->min_region);
if (err)
return err;
} else {
@@ -1201,7 +1204,8 @@ static int damon_commit_targets(
if (!new_target)
return -ENOMEM;
err = damon_commit_target(new_target, false,
- src_target, damon_target_has_pid(src));
+ src_target, damon_target_has_pid(src),
+ src->min_region);
if (err) {
damon_destroy_target(new_target, NULL);
return err;
@@ -1248,6 +1252,7 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
}
dst->ops = src->ops;
dst->addr_unit = src->addr_unit;
+ dst->min_region = max(DAMON_MIN_REGION / dst->addr_unit, 1);
return 0;
}
@@ -1280,8 +1285,8 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
if (ctx->attrs.min_nr_regions)
sz /= ctx->attrs.min_nr_regions;
- if (sz < DAMON_MIN_REGION)
- sz = DAMON_MIN_REGION;
+ if (sz < ctx->min_region)
+ sz = ctx->min_region;
return sz;
}
@@ -1641,8 +1646,9 @@ static bool damos_valid_target(struct damon_ctx *c, struct damon_target *t,
*
* Return: true if the region should be entirely skipped, false otherwise.
*/
-static bool damos_skip_charged_region(struct damon_target *t,
- struct damon_region **rp, struct damos *s)
+static bool damos_skip_charged_region(
+ struct damon_target *t, struct damon_region **rp,
+ struct damos *s, unsigned long min_region)
{
struct damon_region *r = *rp;
struct damos_quota *quota = &s->quota;
@@ -1664,11 +1670,11 @@ static bool damos_skip_charged_region(struct damon_target *t,
if (quota->charge_addr_from && r->ar.start <
quota->charge_addr_from) {
sz_to_skip = ALIGN_DOWN(quota->charge_addr_from -
- r->ar.start, DAMON_MIN_REGION);
+ r->ar.start, min_region);
if (!sz_to_skip) {
- if (damon_sz_region(r) <= DAMON_MIN_REGION)
+ if (damon_sz_region(r) <= min_region)
return true;
- sz_to_skip = DAMON_MIN_REGION;
+ sz_to_skip = min_region;
}
damon_split_region_at(t, r, sz_to_skip);
r = damon_next_region(r);
@@ -1693,7 +1699,8 @@ static void damos_update_stat(struct damos *s,
}
static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
- struct damon_region *r, struct damos_filter *filter)
+ struct damon_region *r, struct damos_filter *filter,
+ unsigned long min_region)
{
bool matched = false;
struct damon_target *ti;
@@ -1710,8 +1717,8 @@ static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
matched = target_idx == filter->target_idx;
break;
case DAMOS_FILTER_TYPE_ADDR:
- start = ALIGN_DOWN(filter->addr_range.start, DAMON_MIN_REGION);
- end = ALIGN_DOWN(filter->addr_range.end, DAMON_MIN_REGION);
+ start = ALIGN_DOWN(filter->addr_range.start, min_region);
+ end = ALIGN_DOWN(filter->addr_range.end, min_region);
/* inside the range */
if (start <= r->ar.start && r->ar.end <= end) {
@@ -1747,7 +1754,7 @@ static bool damos_filter_out(struct damon_ctx *ctx, struct damon_target *t,
s->core_filters_allowed = false;
damos_for_each_filter(filter, s) {
- if (damos_filter_match(ctx, t, r, filter)) {
+ if (damos_filter_match(ctx, t, r, filter, ctx->min_region)) {
if (filter->allow)
s->core_filters_allowed = true;
return !filter->allow;
@@ -1882,7 +1889,7 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
if (c->ops.apply_scheme) {
if (quota->esz && quota->charged_sz + sz > quota->esz) {
sz = ALIGN_DOWN(quota->esz - quota->charged_sz,
- DAMON_MIN_REGION);
+ c->min_region);
if (!sz)
goto update_stat;
damon_split_region_at(t, r, sz);
@@ -1930,7 +1937,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
if (quota->esz && quota->charged_sz >= quota->esz)
continue;
- if (damos_skip_charged_region(t, &r, s))
+ if (damos_skip_charged_region(t, &r, s, c->min_region))
continue;
if (!damos_valid_target(c, t, r, s))
@@ -2324,7 +2331,8 @@ static void damon_split_region_at(struct damon_target *t,
}
/* Split every region in the given target into 'nr_subs' regions */
-static void damon_split_regions_of(struct damon_target *t, int nr_subs)
+static void damon_split_regions_of(struct damon_target *t,
+ int nr_subs, unsigned long min_region)
{
struct damon_region *r, *next;
unsigned long sz_region, sz_sub = 0;
@@ -2334,13 +2342,13 @@ static void damon_split_regions_of(struct damon_target *t, int nr_subs)
sz_region = damon_sz_region(r);
for (i = 0; i < nr_subs - 1 &&
- sz_region > 2 * DAMON_MIN_REGION; i++) {
+ sz_region > 2 * min_region; i++) {
/*
* Randomly select size of left sub-region to be at
* least 10 percent and at most 90% of original region
*/
sz_sub = ALIGN_DOWN(damon_rand(1, 10) *
- sz_region / 10, DAMON_MIN_REGION);
+ sz_region / 10, min_region);
/* Do not allow blank region */
if (sz_sub == 0 || sz_sub >= sz_region)
continue;
@@ -2380,7 +2388,7 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
nr_subregions = 3;
damon_for_each_target(t, ctx)
- damon_split_regions_of(t, nr_subregions);
+ damon_split_regions_of(t, nr_subregions, ctx->min_region);
last_nr_regions = nr_regions;
}
@@ -2769,7 +2777,7 @@ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
addr_range.start = *start;
addr_range.end = *end;
- return damon_set_regions(t, &addr_range, 1);
+ return damon_set_regions(t, &addr_range, 1, DAMON_MIN_REGION);
}
/*
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index 98bf15d403b2..840b3a73147a 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1329,7 +1329,8 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
}
static int damon_sysfs_set_regions(struct damon_target *t,
- struct damon_sysfs_regions *sysfs_regions)
+ struct damon_sysfs_regions *sysfs_regions,
+ unsigned long min_region)
{
struct damon_addr_range *ranges = kmalloc_array(sysfs_regions->nr,
sizeof(*ranges), GFP_KERNEL | __GFP_NOWARN);
@@ -1351,7 +1352,7 @@ static int damon_sysfs_set_regions(struct damon_target *t,
if (ranges[i - 1].end > ranges[i].start)
goto out;
}
- err = damon_set_regions(t, ranges, sysfs_regions->nr);
+ err = damon_set_regions(t, ranges, sysfs_regions->nr, min_region);
out:
kfree(ranges);
return err;
@@ -1372,7 +1373,7 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
/* caller will destroy targets */
return -EINVAL;
}
- return damon_sysfs_set_regions(t, sys_target->regions);
+ return damon_sysfs_set_regions(t, sys_target->regions, ctx->min_region);
}
static int damon_sysfs_add_targets(struct damon_ctx *ctx,
@@ -1430,6 +1431,7 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
if (err)
return err;
ctx->addr_unit = sys_ctx->addr_unit;
+ ctx->min_region = max(DAMON_MIN_REGION / ctx->addr_unit, 1);
err = damon_sysfs_set_attrs(ctx, sys_ctx->attrs);
if (err)
return err;
diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 5f5dc9db2e90..a7fa078da405 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -230,14 +230,14 @@ static void damon_test_split_regions_of(struct kunit *test)
t = damon_new_target();
r = damon_new_region(0, 22);
damon_add_region(r, t);
- damon_split_regions_of(t, 2);
+ damon_split_regions_of(t, 2, DAMON_MIN_REGION);
KUNIT_EXPECT_LE(test, damon_nr_regions(t), 2u);
damon_free_target(t);
t = damon_new_target();
r = damon_new_region(0, 220);
damon_add_region(r, t);
- damon_split_regions_of(t, 4);
+ damon_split_regions_of(t, 4, DAMON_MIN_REGION);
KUNIT_EXPECT_LE(test, damon_nr_regions(t), 4u);
damon_free_target(t);
damon_destroy_ctx(c);
@@ -303,7 +303,7 @@ static void damon_test_set_regions(struct kunit *test)
damon_add_region(r1, t);
damon_add_region(r2, t);
- damon_set_regions(t, &range, 1);
+ damon_set_regions(t, &range, 1, DAMON_MIN_REGION);
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 3);
damon_for_each_region(r, t) {
@@ -450,25 +450,25 @@ static void damos_test_filter_out(struct kunit *test)
damon_add_region(r, t);
/* region in the range */
- KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f));
+ KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region before the range */
r->ar.start = DAMON_MIN_REGION * 1;
r->ar.end = DAMON_MIN_REGION * 2;
- KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
+ KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region after the range */
r->ar.start = DAMON_MIN_REGION * 6;
r->ar.end = DAMON_MIN_REGION * 8;
- KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
+ KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
/* region started before the range */
r->ar.start = DAMON_MIN_REGION * 1;
r->ar.end = DAMON_MIN_REGION * 4;
- KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
+ KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
/* filter should have split the region */
KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 1);
KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 2);
@@ -481,7 +481,7 @@ static void damos_test_filter_out(struct kunit *test)
/* region started in the range */
r->ar.start = DAMON_MIN_REGION * 2;
r->ar.end = DAMON_MIN_REGION * 8;
- KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f));
+ KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
/* filter should have split the region */
KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 2);
KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 6);
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index d2b37ccf2cc0..fce38dd53cf8 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -141,7 +141,7 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
damon_add_region(r, t);
}
- damon_set_regions(t, three_regions, 3);
+ damon_set_regions(t, three_regions, 3, DAMON_MIN_REGION);
for (i = 0; i < nr_expected / 2; i++) {
r = __nth_region_of(t, i);
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 66ef9869eafe..8c048f9b129e 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -299,7 +299,7 @@ static void damon_va_update(struct damon_ctx *ctx)
damon_for_each_target(t, ctx) {
if (damon_va_three_regions(t, three_regions))
continue;
- damon_set_regions(t, three_regions, 3);
+ damon_set_regions(t, three_regions, 3, DAMON_MIN_REGION);
}
}
--
2.43.0
* [RFC PATCH mm-next v2 12/12] mm/damon/core: prevent unnecessary overflow in damos_set_effective_quota()
From: Quanmin Yan @ 2025-08-20 8:06 UTC (permalink / raw)
To: sj
Cc: akpm, damon, linux-kernel, linux-mm, yanquanmin1, wangkefeng.wang,
zuoze1
On 32-bit systems, the throughput calculation in
damos_set_effective_quota() is prone to unnecessary multiplication
overflow. Use mult_frac() to fix it.
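On 32-bit, 'unsigned long' is 32 bits, so the intermediate value of
'total_charged_sz * 1000000' overflows once total_charged_sz exceeds
about 4295 bytes (2^32 / 10^6). mult_frac() avoids the wide
intermediate by dividing first; below is a simplified sketch of what
the real macro in include/linux/math.h computes:

	/*
	 * Simplified sketch of mult_frac(x, n, d): split x by d first
	 * so the intermediate products stay small. The kernel macro
	 * additionally uses temporaries to evaluate each argument only
	 * once.
	 */
	#define MULT_FRAC_SKETCH(x, n, d) \
		((x) / (d) * (n) + (x) % (d) * (n) / (d))
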
Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
---
mm/damon/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 980e271e42e9..38b5f842ef30 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2102,8 +2102,8 @@ static void damos_set_effective_quota(struct damos_quota *quota)
if (quota->ms) {
if (quota->total_charged_ns)
- throughput = quota->total_charged_sz * 1000000 /
- quota->total_charged_ns;
+ throughput = mult_frac(quota->total_charged_sz, 1000000,
+ quota->total_charged_ns);
else
throughput = PAGE_SIZE * 1024;
esz = min(throughput * quota->ms, esz);
--
2.43.0
* Re: [RFC PATCH mm-next v2 10/12] Docs/ABI/damon: document addr_unit file
From: SeongJae Park @ 2025-08-20 21:37 UTC (permalink / raw)
To: Quanmin Yan
Cc: SeongJae Park, akpm, damon, linux-kernel, linux-mm,
wangkefeng.wang, zuoze1
On Wed, 20 Aug 2025 16:06:20 +0800 Quanmin Yan <yanquanmin1@huawei.com> wrote:
> From: SeongJae Park <sj@kernel.org>
>
> Document the addr_unit DAMON sysfs file in the DAMON ABI document.
>
> Signed-off-by: SeongJae Park <sj@kernel.org>
> Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
> ---
> Documentation/ABI/testing/sysfs-kernel-mm-damon | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-damon b/Documentation/ABI/testing/sysfs-kernel-mm-damon
> index 6791d879759e..cf4d66bd119d 100644
> --- a/Documentation/ABI/testing/sysfs-kernel-mm-damon
> +++ b/Documentation/ABI/testing/sysfs-kernel-mm-damon
> @@ -77,6 +77,13 @@ Description: Writing a keyword for a monitoring operations set ('vaddr' for
> Note that only the operations sets that listed in
> 'avail_operations' file are valid inputs.
>
> +What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/addr_unit
> +Date: Apr 2025
Please update above date.
> +Contact: SeongJae Park <sj@kernel.org>
> +Description: Writing an integer to this file sets the 'address unit'
> + parameter of the given operations set of the context. Reading
> + the file returns the last-written 'address unit' value.
> +
> What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/monitoring_attrs/intervals/sample_us
> Date: Mar 2022
> Contact: SeongJae Park <sj@kernel.org>
> --
> 2.43.0
Thanks,
SJ
* Re: [RFC PATCH mm-next v2 11/12] mm/damon: add damon_ctx->min_region
From: SeongJae Park @ 2025-08-20 21:56 UTC (permalink / raw)
To: Quanmin Yan
Cc: SeongJae Park, akpm, damon, linux-kernel, linux-mm,
wangkefeng.wang, zuoze1
On Wed, 20 Aug 2025 16:06:21 +0800 Quanmin Yan <yanquanmin1@huawei.com> wrote:
> Adopting addr_unit would make DAMON_MIN_REGION 'addr_unit * 4096'
> bytes and cause data alignment issues[1].
>
> Add damon_ctx->min_region to change DAMON_MIN_REGION from a global
> macro value to a per-context variable.
>
> [1] https://lore.kernel.org/all/527714dd-0e33-43ab-bbbd-d89670ba79e7@huawei.com
>
> Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
> ---
> include/linux/damon.h | 4 ++-
> mm/damon/core.c | 68 ++++++++++++++++++++----------------
> mm/damon/sysfs.c | 8 +++--
> mm/damon/tests/core-kunit.h | 16 ++++-----
> mm/damon/tests/vaddr-kunit.h | 2 +-
> mm/damon/vaddr.c | 2 +-
> 6 files changed, 56 insertions(+), 44 deletions(-)
>
> diff --git a/include/linux/damon.h b/include/linux/damon.h
> index 6fa52f7495d9..bebd791f37f1 100644
> --- a/include/linux/damon.h
> +++ b/include/linux/damon.h
> @@ -747,6 +747,7 @@ struct damon_attrs {
> *
> * @ops: Set of monitoring operations for given use cases.
> * @addr_unit: Scale factor for core to ops address conversion.
> + * @min_region: Minimum Region Size.
The name feels not very clear to me. What about min_sz_region? Also, the
description could be a more general sentence, e.g., 'Minimum region size.'
Apparently the name came from DAMON_MIN_REGION, which I named. Let's blame SJ
of 2021 :) Let's not change DAMON_MIN_REGION's name, though. Maybe such a
change is not really necessary for now.
I was thinking I gave the same comment on RFC v1, but I didn't. Seems I
forgot to send a draft. Sorry for the late comment.
> * @adaptive_targets: Head of monitoring targets (&damon_target) list.
> * @schemes: Head of schemes (&damos) list.
> */
> @@ -789,6 +790,7 @@ struct damon_ctx {
>
> struct damon_operations ops;
> unsigned long addr_unit;
> + unsigned long min_region;
>
> struct list_head adaptive_targets;
> struct list_head schemes;
> @@ -877,7 +879,7 @@ static inline void damon_insert_region(struct damon_region *r,
> void damon_add_region(struct damon_region *r, struct damon_target *t);
> void damon_destroy_region(struct damon_region *r, struct damon_target *t);
> int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
> - unsigned int nr_ranges);
> + unsigned int nr_ranges, unsigned long min_region);
> void damon_update_region_access_rate(struct damon_region *r, bool accessed,
> struct damon_attrs *attrs);
>
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 8f8aa84953ac..980e271e42e9 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -208,7 +208,7 @@ static int damon_fill_regions_holes(struct damon_region *first,
> * Return: 0 if success, or negative error code otherwise.
> */
> int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
> - unsigned int nr_ranges)
> + unsigned int nr_ranges, unsigned long min_region)
> {
> struct damon_region *r, *next;
> unsigned int i;
> @@ -245,16 +245,16 @@ int damon_set_regions(struct damon_target *t, struct damon_addr_range *ranges,
> /* no region intersects with this range */
> newr = damon_new_region(
> ALIGN_DOWN(range->start,
> - DAMON_MIN_REGION),
> - ALIGN(range->end, DAMON_MIN_REGION));
> + min_region),
> + ALIGN(range->end, min_region));
> if (!newr)
> return -ENOMEM;
> damon_insert_region(newr, damon_prev_region(r), r, t);
> } else {
> /* resize intersecting regions to fit in this range */
> first->ar.start = ALIGN_DOWN(range->start,
> - DAMON_MIN_REGION);
> - last->ar.end = ALIGN(range->end, DAMON_MIN_REGION);
> + min_region);
> + last->ar.end = ALIGN(range->end, min_region);
>
> /* fill possible holes in the range */
> err = damon_fill_regions_holes(first, last, t);
> @@ -545,6 +545,7 @@ struct damon_ctx *damon_new_ctx(void)
> ctx->attrs.max_nr_regions = 1000;
>
> ctx->addr_unit = 1;
> + ctx->min_region = DAMON_MIN_REGION;
>
> INIT_LIST_HEAD(&ctx->adaptive_targets);
> INIT_LIST_HEAD(&ctx->schemes);
> @@ -1127,8 +1128,8 @@ static struct damon_target *damon_nth_target(int n, struct damon_ctx *ctx)
> *
> * If @src has no region, @dst keeps current regions.
> */
> -static int damon_commit_target_regions(
> - struct damon_target *dst, struct damon_target *src)
> +static int damon_commit_target_regions(struct damon_target *dst,
> + struct damon_target *src, unsigned long src_min_region)
> {
> struct damon_region *src_region;
> struct damon_addr_range *ranges;
> @@ -1145,18 +1146,19 @@ static int damon_commit_target_regions(
> i = 0;
> damon_for_each_region(src_region, src)
> ranges[i++] = src_region->ar;
> - err = damon_set_regions(dst, ranges, i);
> + err = damon_set_regions(dst, ranges, i, src_min_region);
> kfree(ranges);
> return err;
> }
>
> static int damon_commit_target(
> struct damon_target *dst, bool dst_has_pid,
> - struct damon_target *src, bool src_has_pid)
> + struct damon_target *src, bool src_has_pid,
> + unsigned long src_min_region)
> {
> int err;
>
> - err = damon_commit_target_regions(dst, src);
> + err = damon_commit_target_regions(dst, src, src_min_region);
> if (err)
> return err;
> if (dst_has_pid)
> @@ -1178,7 +1180,8 @@ static int damon_commit_targets(
> if (src_target) {
> err = damon_commit_target(
> dst_target, damon_target_has_pid(dst),
> - src_target, damon_target_has_pid(src));
> + src_target, damon_target_has_pid(src),
> + src->min_region);
> if (err)
> return err;
> } else {
> @@ -1201,7 +1204,8 @@ static int damon_commit_targets(
> if (!new_target)
> return -ENOMEM;
> err = damon_commit_target(new_target, false,
> - src_target, damon_target_has_pid(src));
> + src_target, damon_target_has_pid(src),
> + src->min_region);
> if (err) {
> damon_destroy_target(new_target, NULL);
> return err;
> @@ -1248,6 +1252,7 @@ int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
> }
> dst->ops = src->ops;
> dst->addr_unit = src->addr_unit;
> + dst->min_region = max(DAMON_MIN_REGION / dst->addr_unit, 1);
Can't we set this as src->min_region?
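For context, a rough sketch of what this derivation yields, with hypothetical
values and assuming DAMON_MIN_REGION is PAGE_SIZE (4096):

	/*
	 * With addr_unit == 1024, one core-layer byte stands for 1024
	 * real bytes, so the core-layer minimum shrinks accordingly:
	 * 4096 / 1024 == 4 core-layer bytes == 4096 real bytes.
	 * The max(..., 1) keeps the result nonzero when addr_unit
	 * exceeds DAMON_MIN_REGION.
	 */
	unsigned long min_region = max(DAMON_MIN_REGION / addr_unit, 1UL);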
>
> return 0;
> }
> @@ -1280,8 +1285,8 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
>
> if (ctx->attrs.min_nr_regions)
> sz /= ctx->attrs.min_nr_regions;
> - if (sz < DAMON_MIN_REGION)
> - sz = DAMON_MIN_REGION;
> + if (sz < ctx->min_region)
> + sz = ctx->min_region;
>
> return sz;
> }
> @@ -1641,8 +1646,9 @@ static bool damos_valid_target(struct damon_ctx *c, struct damon_target *t,
> *
> * Return: true if the region should be entirely skipped, false otherwise.
> */
> -static bool damos_skip_charged_region(struct damon_target *t,
> - struct damon_region **rp, struct damos *s)
> +static bool damos_skip_charged_region(
The above line doesn't really need to be changed. Let's keep it as is.
> + struct damon_target *t, struct damon_region **rp,
> + struct damos *s, unsigned long min_region)
> {
> struct damon_region *r = *rp;
> struct damos_quota *quota = &s->quota;
> @@ -1664,11 +1670,11 @@ static bool damos_skip_charged_region(struct damon_target *t,
> if (quota->charge_addr_from && r->ar.start <
> quota->charge_addr_from) {
> sz_to_skip = ALIGN_DOWN(quota->charge_addr_from -
> - r->ar.start, DAMON_MIN_REGION);
> + r->ar.start, min_region);
> if (!sz_to_skip) {
> - if (damon_sz_region(r) <= DAMON_MIN_REGION)
> + if (damon_sz_region(r) <= min_region)
> return true;
> - sz_to_skip = DAMON_MIN_REGION;
> + sz_to_skip = min_region;
> }
> damon_split_region_at(t, r, sz_to_skip);
> r = damon_next_region(r);
> @@ -1693,7 +1699,8 @@ static void damos_update_stat(struct damos *s,
> }
>
> static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
> - struct damon_region *r, struct damos_filter *filter)
> + struct damon_region *r, struct damos_filter *filter,
> + unsigned long min_region)
> {
> bool matched = false;
> struct damon_target *ti;
> @@ -1710,8 +1717,8 @@ static bool damos_filter_match(struct damon_ctx *ctx, struct damon_target *t,
> matched = target_idx == filter->target_idx;
> break;
> case DAMOS_FILTER_TYPE_ADDR:
> - start = ALIGN_DOWN(filter->addr_range.start, DAMON_MIN_REGION);
> - end = ALIGN_DOWN(filter->addr_range.end, DAMON_MIN_REGION);
> + start = ALIGN_DOWN(filter->addr_range.start, min_region);
> + end = ALIGN_DOWN(filter->addr_range.end, min_region);
>
> /* inside the range */
> if (start <= r->ar.start && r->ar.end <= end) {
> @@ -1747,7 +1754,7 @@ static bool damos_filter_out(struct damon_ctx *ctx, struct damon_target *t,
>
> s->core_filters_allowed = false;
> damos_for_each_filter(filter, s) {
> - if (damos_filter_match(ctx, t, r, filter)) {
> + if (damos_filter_match(ctx, t, r, filter, ctx->min_region)) {
> if (filter->allow)
> s->core_filters_allowed = true;
> return !filter->allow;
> @@ -1882,7 +1889,7 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
> if (c->ops.apply_scheme) {
> if (quota->esz && quota->charged_sz + sz > quota->esz) {
> sz = ALIGN_DOWN(quota->esz - quota->charged_sz,
> - DAMON_MIN_REGION);
> + c->min_region);
> if (!sz)
> goto update_stat;
> damon_split_region_at(t, r, sz);
> @@ -1930,7 +1937,7 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
> if (quota->esz && quota->charged_sz >= quota->esz)
> continue;
>
> - if (damos_skip_charged_region(t, &r, s))
> + if (damos_skip_charged_region(t, &r, s, c->min_region))
> continue;
>
> if (!damos_valid_target(c, t, r, s))
> @@ -2324,7 +2331,8 @@ static void damon_split_region_at(struct damon_target *t,
> }
>
> /* Split every region in the given target into 'nr_subs' regions */
> -static void damon_split_regions_of(struct damon_target *t, int nr_subs)
> +static void damon_split_regions_of(struct damon_target *t,
> + int nr_subs, unsigned long min_region)
Let's keep nr_subs on the upper line.
> {
> struct damon_region *r, *next;
> unsigned long sz_region, sz_sub = 0;
> @@ -2334,13 +2342,13 @@ static void damon_split_regions_of(struct damon_target *t, int nr_subs)
> sz_region = damon_sz_region(r);
>
> for (i = 0; i < nr_subs - 1 &&
> - sz_region > 2 * DAMON_MIN_REGION; i++) {
> + sz_region > 2 * min_region; i++) {
> /*
> * Randomly select size of left sub-region to be at
> * least 10 percent and at most 90% of original region
> */
> sz_sub = ALIGN_DOWN(damon_rand(1, 10) *
> - sz_region / 10, DAMON_MIN_REGION);
> + sz_region / 10, min_region);
> /* Do not allow blank region */
> if (sz_sub == 0 || sz_sub >= sz_region)
> continue;
> @@ -2380,7 +2388,7 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
> nr_subregions = 3;
>
> damon_for_each_target(t, ctx)
> - damon_split_regions_of(t, nr_subregions);
> + damon_split_regions_of(t, nr_subregions, ctx->min_region);
>
> last_nr_regions = nr_regions;
> }
> @@ -2769,7 +2777,7 @@ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
>
> addr_range.start = *start;
> addr_range.end = *end;
> - return damon_set_regions(t, &addr_range, 1);
> + return damon_set_regions(t, &addr_range, 1, DAMON_MIN_REGION);
> }
>
> /*
> diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
> index 98bf15d403b2..840b3a73147a 100644
> --- a/mm/damon/sysfs.c
> +++ b/mm/damon/sysfs.c
> @@ -1329,7 +1329,8 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
> }
>
> static int damon_sysfs_set_regions(struct damon_target *t,
> - struct damon_sysfs_regions *sysfs_regions)
> + struct damon_sysfs_regions *sysfs_regions,
> + unsigned long min_region)
> {
> struct damon_addr_range *ranges = kmalloc_array(sysfs_regions->nr,
> sizeof(*ranges), GFP_KERNEL | __GFP_NOWARN);
> @@ -1351,7 +1352,7 @@ static int damon_sysfs_set_regions(struct damon_target *t,
> if (ranges[i - 1].end > ranges[i].start)
> goto out;
> }
> - err = damon_set_regions(t, ranges, sysfs_regions->nr);
> + err = damon_set_regions(t, ranges, sysfs_regions->nr, min_region);
> out:
> kfree(ranges);
> return err;
> @@ -1372,7 +1373,7 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
> /* caller will destroy targets */
> return -EINVAL;
> }
> - return damon_sysfs_set_regions(t, sys_target->regions);
> + return damon_sysfs_set_regions(t, sys_target->regions, ctx->min_region);
> }
>
> static int damon_sysfs_add_targets(struct damon_ctx *ctx,
> @@ -1430,6 +1431,7 @@ static int damon_sysfs_apply_inputs(struct damon_ctx *ctx,
> if (err)
> return err;
> ctx->addr_unit = sys_ctx->addr_unit;
> + ctx->min_region = max(DAMON_MIN_REGION / ctx->addr_unit, 1);
> err = damon_sysfs_set_attrs(ctx, sys_ctx->attrs);
> if (err)
> return err;
> diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
> index 5f5dc9db2e90..a7fa078da405 100644
> --- a/mm/damon/tests/core-kunit.h
> +++ b/mm/damon/tests/core-kunit.h
> @@ -230,14 +230,14 @@ static void damon_test_split_regions_of(struct kunit *test)
> t = damon_new_target();
> r = damon_new_region(0, 22);
> damon_add_region(r, t);
> - damon_split_regions_of(t, 2);
> + damon_split_regions_of(t, 2, DAMON_MIN_REGION);
> KUNIT_EXPECT_LE(test, damon_nr_regions(t), 2u);
> damon_free_target(t);
>
> t = damon_new_target();
> r = damon_new_region(0, 220);
> damon_add_region(r, t);
> - damon_split_regions_of(t, 4);
> + damon_split_regions_of(t, 4, DAMON_MIN_REGION);
> KUNIT_EXPECT_LE(test, damon_nr_regions(t), 4u);
> damon_free_target(t);
> damon_destroy_ctx(c);
> @@ -303,7 +303,7 @@ static void damon_test_set_regions(struct kunit *test)
>
> damon_add_region(r1, t);
> damon_add_region(r2, t);
> - damon_set_regions(t, &range, 1);
> + damon_set_regions(t, &range, 1, DAMON_MIN_REGION);
>
> KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 3);
> damon_for_each_region(r, t) {
> @@ -450,25 +450,25 @@ static void damos_test_filter_out(struct kunit *test)
> damon_add_region(r, t);
>
> /* region in the range */
> - KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f));
> + KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
Let's break lines longer than 80 columns[1]. Same for the four changes below.
[1] https://docs.kernel.org/process/coding-style.html#breaking-long-lines-and-strings
> KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
>
> /* region before the range */
> r->ar.start = DAMON_MIN_REGION * 1;
> r->ar.end = DAMON_MIN_REGION * 2;
> - KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
> + KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
> KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
>
> /* region after the range */
> r->ar.start = DAMON_MIN_REGION * 6;
> r->ar.end = DAMON_MIN_REGION * 8;
> - KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
> + KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
> KUNIT_EXPECT_EQ(test, damon_nr_regions(t), 1);
>
> /* region started before the range */
> r->ar.start = DAMON_MIN_REGION * 1;
> r->ar.end = DAMON_MIN_REGION * 4;
> - KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f));
> + KUNIT_EXPECT_FALSE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
> /* filter should have split the region */
> KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 1);
> KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 2);
> @@ -481,7 +481,7 @@ static void damos_test_filter_out(struct kunit *test)
> /* region started in the range */
> r->ar.start = DAMON_MIN_REGION * 2;
> r->ar.end = DAMON_MIN_REGION * 8;
> - KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f));
> + KUNIT_EXPECT_TRUE(test, damos_filter_match(NULL, t, r, f, DAMON_MIN_REGION));
> /* filter should have split the region */
> KUNIT_EXPECT_EQ(test, r->ar.start, DAMON_MIN_REGION * 2);
> KUNIT_EXPECT_EQ(test, r->ar.end, DAMON_MIN_REGION * 6);
> diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
> index d2b37ccf2cc0..fce38dd53cf8 100644
> --- a/mm/damon/tests/vaddr-kunit.h
> +++ b/mm/damon/tests/vaddr-kunit.h
> @@ -141,7 +141,7 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
> damon_add_region(r, t);
> }
>
> - damon_set_regions(t, three_regions, 3);
> + damon_set_regions(t, three_regions, 3, DAMON_MIN_REGION);
>
> for (i = 0; i < nr_expected / 2; i++) {
> r = __nth_region_of(t, i);
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 66ef9869eafe..8c048f9b129e 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -299,7 +299,7 @@ static void damon_va_update(struct damon_ctx *ctx)
> damon_for_each_target(t, ctx) {
> if (damon_va_three_regions(t, three_regions))
> continue;
> - damon_set_regions(t, three_regions, 3);
> + damon_set_regions(t, three_regions, 3, DAMON_MIN_REGION);
> }
> }
>
> --
> 2.43.0
Other than the comments added above, this looks good to me overall.
Thanks,
SJ
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [RFC PATCH mm-next v2 12/12] mm/damon/core: prevent unnecessary overflow in damos_set_effective_quota()
2025-08-20 8:06 ` [RFC PATCH mm-next v2 12/12] mm/damon/core: prevent unnecessary overflow in damos_set_effective_quota() Quanmin Yan
@ 2025-08-20 22:16 ` SeongJae Park
0 siblings, 0 replies; 18+ messages in thread
From: SeongJae Park @ 2025-08-20 22:16 UTC (permalink / raw)
To: Quanmin Yan
Cc: SeongJae Park, akpm, damon, linux-kernel, linux-mm,
wangkefeng.wang, zuoze1, Andrew Paniakin
+ Andrew
On Wed, 20 Aug 2025 16:06:22 +0800 Quanmin Yan <yanquanmin1@huawei.com> wrote:
> On 32-bit systems, the throughput calculation in
> damos_set_effective_quota() is prone to unnecessary
> multiplication overflow. Use mult_frac() to fix it.
Andrew Paniakin also recently found and privately reported this issue. It can
also happen on 64-bit systems, once the charged size exceeds ~17 TiB. On
systems running in production for a long time, this issue can actually happen.
More specifically, when a DAMOS scheme having a time quota runs for a long
time, the throughput calculation can overflow and set esz too small. As a
result, the scheme gets unexpectedly slow.
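For reference on the ~17 TiB figure: the intermediate
total_charged_sz * 1000000 wraps a 64-bit unsigned long once total_charged_sz
exceeds 2^64 / 10^6, about 1.8e13 bytes (~16.8 TiB). mult_frac()
(include/linux/math.h) sidesteps the full-width intermediate by splitting the
multiplication around the division; a simplified sketch of the idea, not the
actual macro:

	/* x * n / d without forming the full x * n intermediate */
	static inline unsigned long mul_div(unsigned long x,
			unsigned long n, unsigned long d)
	{
		unsigned long q = x / d;
		unsigned long r = x % d;

		return q * n + r * n / d;	/* exact: x == q * d + r */
	}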
Quanmin, could you please add the above paragraph to the commit message? Also,
I think this fix deserves Cc: stable@ and has no reason to be part of this
patch series. Could you please add appropriate tags like below and post it
again separately?
Fixes: 1cd243030059 ("mm/damon/schemes: implement time quota")
Reported-by: Andrew Paniakin <apanyaki@amazon.com>
Closes: N/A # privately reported
Cc: <stable@vger.kernel.org> # 5.16.x
>
> Signed-off-by: Quanmin Yan <yanquanmin1@huawei.com>
> ---
> mm/damon/core.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index 980e271e42e9..38b5f842ef30 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -2102,8 +2102,8 @@ static void damos_set_effective_quota(struct damos_quota *quota)
>
> if (quota->ms) {
> if (quota->total_charged_ns)
> - throughput = quota->total_charged_sz * 1000000 /
> - quota->total_charged_ns;
> + throughput = mult_frac(quota->total_charged_sz, 1000000,
> + quota->total_charged_ns);
> else
> throughput = PAGE_SIZE * 1024;
> esz = min(throughput * quota->ms, esz);
> --
> 2.43.0
Thanks,
SJ
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE
2025-08-20 8:06 [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE Quanmin Yan
` (11 preceding siblings ...)
2025-08-20 8:06 ` [RFC PATCH mm-next v2 12/12] mm/damon/core: prevent unnecessary overflow in damos_set_effective_quota() Quanmin Yan
@ 2025-08-20 22:23 ` SeongJae Park
2025-08-21 11:19 ` Quanmin Yan
12 siblings, 1 reply; 18+ messages in thread
From: SeongJae Park @ 2025-08-20 22:23 UTC (permalink / raw)
To: Quanmin Yan
Cc: SeongJae Park, akpm, damon, linux-kernel, linux-mm,
wangkefeng.wang, zuoze1
On Wed, 20 Aug 2025 16:06:10 +0800 Quanmin Yan <yanquanmin1@huawei.com> wrote:
> Previously, DAMON's physical address space monitoring only supported
> memory ranges below 4GB on LPAE-enabled systems. This was due to
> the use of 'unsigned long' in 'struct damon_addr_range', which is
> 32-bit on ARM32 even with LPAE enabled[1].
>
> To add DAMON support for ARM32 with LPAE enabled, a new core layer
> parameter called 'addr_unit' was introduced[2]. Operations set layer
> can translate a core layer address to the real address by multiplying
> the parameter value to the core layer address. Support of the parameter
> is up to each operations layer implementation, though. For example,
> operations set implementations for virtual address space can simply
> ignore the parameter. Add the support on paddr, which is the DAMON
> operations set implementation for the physical address space, as we have
> a clear use case for that.
>
> [1]https://lore.kernel.org/all/20250408075553.959388-1-zuoze1@huawei.com/
> [2]https://lore.kernel.org/all/20250416042551.158131-1-sj@kernel.org/
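(A minimal sketch of the translation described above; the helper name below is
hypothetical, and phys_addr_t is 64-bit on ARM32 with LPAE:)

	static phys_addr_t damon_pa_core_to_real(unsigned long core_addr,
			unsigned long addr_unit)
	{
		/* e.g. core_addr 0x100000 with addr_unit 16 -> 0x1000000 */
		return (phys_addr_t)core_addr * addr_unit;
	}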
>
> Changes in v2:
It would be nice if you could also add a link to the previous version, e.g.,
like the revision history of
https://lore.kernel.org/20250819193404.46680-1-sj@kernel.org
> - set DAMOS_PAGEOUT, DAMOS_LRU_[DE]PRIO, DAMOS_MIGRATE_{HOT,COLD} and
> DAMOS_STAT stat in core address unit.
> - pass ctx->min_region value to replace the original synchronization.
> - drop the DAMOS stats type changes, keep them as 'unsigned long' type.
> - separate add addr_unit support for DAMON_RECLAIM and LRU_SORT from
> this patch series.
Thank you for continuing this work!
>
> Quanmin Yan (2):
> mm/damon: add damon_ctx->min_region
> mm/damon/core: prevent unnecessary overflow in
> damos_set_effective_quota()
I left a few comments. In essence, let's rename min_region to min_sz_region,
and separate the last fix from this series.
Other than the above, this looks good overall. I think you can drop the RFC
tag from the next version.
Thanks,
SJ
[...]
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE
2025-08-20 22:23 ` [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE SeongJae Park
@ 2025-08-21 11:19 ` Quanmin Yan
0 siblings, 0 replies; 18+ messages in thread
From: Quanmin Yan @ 2025-08-21 11:19 UTC (permalink / raw)
To: SeongJae Park
Cc: akpm, damon, linux-kernel, linux-mm, wangkefeng.wang, zuoze1
Hi SJ,
On 2025/8/21 6:23, SeongJae Park wrote:
> On Wed, 20 Aug 2025 16:06:10 +0800 Quanmin Yan <yanquanmin1@huawei.com> wrote:
>
>> Previously, DAMON's physical address space monitoring only supported
>> memory ranges below 4GB on LPAE-enabled systems. This was due to
>> the use of 'unsigned long' in 'struct damon_addr_range', which is
>> 32-bit on ARM32 even with LPAE enabled[1].
>>
>> To add DAMON support for ARM32 with LPAE enabled, a new core layer
>> parameter called 'addr_unit' was introduced[2]. Operations set layer
>> can translate a core layer address to the real address by multiplying
>> the parameter value to the core layer address. Support of the parameter
>> is up to each operations layer implementation, though. For example,
>> operations set implementations for virtual address space can simply
>> ignore the parameter. Add the support on paddr, which is the DAMON
>> operations set implementation for the physical address space, as we have
>> a clear use case for that.
>>
>> [1]https://lore.kernel.org/all/20250408075553.959388-1-zuoze1@huawei.com/
>> [2]https://lore.kernel.org/all/20250416042551.158131-1-sj@kernel.org/
>>
>> Changes in v2:
> It would be nice if you can also add the link to the previous version, e.g.,
> like the revisions history of
> https://lore.kernel.org/20250819193404.46680-1-sj@kernel.org
>
>> - set DAMOS_PAGEOUT, DAMOS_LRU_[DE]PRIO, DAMOS_MIGRATE_{HOT,COLD} and
>> DAMOS_STAT stat in core address unit.
>> - pass ctx->min_region value to replace the original synchronization.
>> - drop the DAMOS stats type changes, keep them as 'unsigned long' type.
>> - separate add addr_unit support for DAMON_RECLAIM and LRU_SORT from
>> this patch series.
> Thank you for continuing this work!
>
>> Quanmin Yan (2):
>> mm/damon: add damon_ctx->min_region
>> mm/damon/core: prevent unnecessary overflow in
>> damos_set_effective_quota()
> I left a few comments. In essense, let's rename min_region to min_sz_region,
> and separate the last fix from this series.
>
> Other than above, looks good overall. I think you can drop RFC tag from the
> next version.
>
>
> Thanks,
> SJ
>
Thank you for your guidance on my work. I have posted a new patch series;
please review it at [1].
[1] https://lore.kernel.org/all/20250821105159.2503894-1-yanquanmin1@huawei.com/
Best regards,
Quanmin Yan
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads: ~2025-08-21 11:20 UTC
Thread overview: 18+ messages
2025-08-20 8:06 [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 01/12] mm/damon/core: add damon_ctx->addr_unit Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 02/12] mm/damon/paddr: support addr_unit for access monitoring Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 03/12] mm/damon/paddr: support addr_unit for DAMOS_PAGEOUT Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 04/12] mm/damon/paddr: support addr_unit for DAMOS_LRU_[DE]PRIO Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 05/12] mm/damon/paddr: support addr_unit for MIGRATE_{HOT,COLD} Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 06/12] mm/damon/paddr: support addr_unit for DAMOS_STAT Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 07/12] mm/damon/sysfs: implement addr_unit file under context dir Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 08/12] Docs/mm/damon/design: document 'address unit' parameter Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 09/12] Docs/admin-guide/mm/damon/usage: document addr_unit file Quanmin Yan
2025-08-20 8:06 ` [RFC PATCH mm-next v2 10/12] Docs/ABI/damon: " Quanmin Yan
2025-08-20 21:37 ` SeongJae Park
2025-08-20 8:06 ` [RFC PATCH mm-next v2 11/12] mm/damon: add damon_ctx->min_region Quanmin Yan
2025-08-20 21:56 ` SeongJae Park
2025-08-20 8:06 ` [RFC PATCH mm-next v2 12/12] mm/damon/core: prevent unnecessary overflow in damos_set_effective_quota() Quanmin Yan
2025-08-20 22:16 ` SeongJae Park
2025-08-20 22:23 ` [RFC PATCH mm-next v2 00/12] mm/damon: support ARM32 with LPAE SeongJae Park
2025-08-21 11:19 ` Quanmin Yan