* [PATCH 00/14] mm/damon: remove damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
damon_callback was the only way for communicating with a DAMON context
running on its worker thread. The interface is flexible and simple,
but it has aged as DAMON gained more features. Its lack of
synchronization support, for example, turned out to be inconvenient for
runtime parameter updates. It is also arguably not easy to use
correctly, since callers must understand when each callback is invoked
and what the return values from the callbacks imply.
To replace it, damon_call() and damos_walk() were introduced, and they
have already taken over a few damon_callback use cases. Some use
cases, such as parallel or repeated reading of DAMON internal data and
additional cleanups, cannot simply be moved to damon_call() and
damos_walk() as they are, though.
To make those replaceable, extend damon_call() for parallel and/or
repeated callbacks and modify the core/ops layers for additional
resource cleanup. With the updates, replace the remaining
damon_callback usages and finally say goodbye to damon_callback.
Changes from RFC
(https://lore.kernel.org/20250706200018.42704-1-sj@kernel.org)
- Fix wrong return from loop that prevents repeat mode damon_call().
- Remove unnecessary put_pid() from DAMON sample modules.
- Rebase to latest mm-new.
SeongJae Park (14):
mm/damon: accept parallel damon_call() requests
mm/damon/core: introduce repeat mode damon_call()
mm/damon/stat: use damon_call() repeat mode instead of damon_callback
mm/damon/reclaim: use damon_call() repeat mode instead of
damon_callback
mm/damon/lru_sort: use damon_call() repeat mode instead of
damon_callback
samples/damon/prcl: use damon_call() repeat mode instead of
damon_callback
samples/damon/wsse: use damon_call() repeat mode instead of
damon_callback
mm/damon/core: do not call ops.cleanup() when destroying targets
mm/damon/core: add cleanup_target() ops callback
mm/damon/vaddr: put pid in cleanup_target()
mm/damon/sysfs: remove damon_sysfs_destroy_targets()
mm/damon/core: destroy targets when kdamond_fn() finish
mm/damon/sysfs: remove damon_sysfs_before_terminate()
mm/damon/core: remove damon_callback
include/linux/damon.h | 44 ++++---------
mm/damon/core.c | 116 ++++++++++++++++++-----------------
mm/damon/lru_sort.c | 70 ++++++++++-----------
mm/damon/reclaim.c | 62 +++++++++----------
mm/damon/stat.c | 17 ++++-
mm/damon/sysfs.c | 41 +------------
mm/damon/tests/core-kunit.h | 4 +-
mm/damon/tests/vaddr-kunit.h | 2 +-
mm/damon/vaddr.c | 6 ++
samples/damon/prcl.c | 20 ++++--
samples/damon/wsse.c | 18 ++++--
11 files changed, 189 insertions(+), 211 deletions(-)
base-commit: 2c92772b92095b79ca713f215b192b555e41f72e
--
2.39.5
* [PATCH 01/14] mm/damon: accept parallel damon_call() requests
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
Calling damon_call() while it is already serving another thread
immediately fails with -EBUSY, and the caller has to retry later.
Having each caller implement such retry logic would be redundant.
Accept parallel damon_call() requests and do the waiting on behalf of
the callers instead.
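As an illustration of the new behavior, two threads can now issue
requests against the same context at once; each simply blocks until the
kdamond has served its own control (the callback names below are
hypothetical):

    /* thread A */
    struct damon_call_control a_ctrl = { .fn = read_stats, .data = ctx };
    int a_err = damon_call(ctx, &a_ctrl);    /* waits; no -EBUSY */

    /* thread B, running concurrently */
    struct damon_call_control b_ctrl = { .fn = update_params, .data = ctx };
    int b_err = damon_call(ctx, &b_ctrl);    /* queued, served in turn */

Note that each caller still needs its own damon_call_control object,
since the completion and the list node are embedded in it.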
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 7 +++++--
mm/damon/core.c | 49 ++++++++++++++++++++++---------------------
2 files changed, 30 insertions(+), 26 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 1f425d830bb9..562c7876ba88 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -673,6 +673,8 @@ struct damon_call_control {
struct completion completion;
/* informs if the kdamond canceled @fn invocation */
bool canceled;
+ /* List head for siblings. */
+ struct list_head list;
};
/**
@@ -798,8 +800,9 @@ struct damon_ctx {
/* for scheme quotas prioritization */
unsigned long *regions_score_histogram;
- struct damon_call_control *call_control;
- struct mutex call_control_lock;
+ /* lists of &struct damon_call_control */
+ struct list_head call_controls;
+ struct mutex call_controls_lock;
struct damos_walk_control *walk_control;
struct mutex walk_control_lock;
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 2714a7a023db..b0a0b98f6889 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -533,7 +533,8 @@ struct damon_ctx *damon_new_ctx(void)
ctx->next_ops_update_sis = 0;
mutex_init(&ctx->kdamond_lock);
- mutex_init(&ctx->call_control_lock);
+ INIT_LIST_HEAD(&ctx->call_controls);
+ mutex_init(&ctx->call_controls_lock);
mutex_init(&ctx->walk_control_lock);
ctx->attrs.min_nr_regions = 10;
@@ -1393,14 +1394,11 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
{
init_completion(&control->completion);
control->canceled = false;
+ INIT_LIST_HEAD(&control->list);
- mutex_lock(&ctx->call_control_lock);
- if (ctx->call_control) {
- mutex_unlock(&ctx->call_control_lock);
- return -EBUSY;
- }
- ctx->call_control = control;
- mutex_unlock(&ctx->call_control_lock);
+ mutex_lock(&ctx->call_controls_lock);
+ list_add_tail(&control->list, &ctx->call_controls);
+ mutex_unlock(&ctx->call_controls_lock);
if (!damon_is_running(ctx))
return -EINVAL;
wait_for_completion(&control->completion);
@@ -2419,11 +2417,11 @@ static void kdamond_usleep(unsigned long usecs)
}
/*
- * kdamond_call() - handle damon_call_control.
+ * kdamond_call() - handle damon_call_control objects.
* @ctx: The &struct damon_ctx of the kdamond.
* @cancel: Whether to cancel the invocation of the function.
*
- * If there is a &struct damon_call_control request that registered via
+ * If there are &struct damon_call_control requests registered via
* &damon_call() on @ctx, do or cancel the invocation of the function depending
* on @cancel. @cancel is set when the kdamond is already out of the main loop
* and therefore will be terminated.
@@ -2433,21 +2431,24 @@ static void kdamond_call(struct damon_ctx *ctx, bool cancel)
struct damon_call_control *control;
int ret = 0;
- mutex_lock(&ctx->call_control_lock);
- control = ctx->call_control;
- mutex_unlock(&ctx->call_control_lock);
- if (!control)
- return;
- if (cancel) {
- control->canceled = true;
- } else {
- ret = control->fn(control->data);
- control->return_code = ret;
+ while (true) {
+ mutex_lock(&ctx->call_controls_lock);
+ control = list_first_entry_or_null(&ctx->call_controls,
+ struct damon_call_control, list);
+ mutex_unlock(&ctx->call_controls_lock);
+ if (!control)
+ return;
+ if (cancel) {
+ control->canceled = true;
+ } else {
+ ret = control->fn(control->data);
+ control->return_code = ret;
+ }
+ mutex_lock(&ctx->call_controls_lock);
+ list_del(&control->list);
+ mutex_unlock(&ctx->call_controls_lock);
+ complete(&control->completion);
}
- complete(&control->completion);
- mutex_lock(&ctx->call_control_lock);
- ctx->call_control = NULL;
- mutex_unlock(&ctx->call_control_lock);
}
/* Returns negative error code if it's not activated but should return */
--
2.39.5
* [PATCH 02/14] mm/damon/core: introduce repeat mode damon_call()
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
damon_call() is useful for one-off reads or writes of DAMON internal
data. A common pattern of DAMON core usage from DAMON modules,
however, is doing such reads and writes repeatedly, for example to
periodically update the DAMOS stats. Doing that with damon_call()
requires each caller to invoke damon_call() again and again with its
own delay loop, which is redundant. Introduce a repeat mode of
damon_call(). Callers can use the mode by setting a new field in
damon_call_control. If the mode is turned on, damon_call() returns
success immediately, and DAMON repeatedly invokes the callback function
inside the kdamond main loop.
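Put together, a module that previously re-armed a damon_callback every
aggregation interval can now register its function once, along the
lines of the sketch below (the callback name is hypothetical). The
following patches convert DAMON_STAT, DAMON_RECLAIM, DAMON_LRU_SORT and
the sample modules to this pattern.

    static struct damon_call_control control = {
            .fn = my_periodic_fn,
            .repeat = true,
    };

    /* after damon_start() succeeded for ctx */
    control.data = ctx;

    /*
     * Returns 0 immediately; the kdamond keeps invoking
     * my_periodic_fn() from its main loop until it terminates.
     */
    int err = damon_call(ctx, &control);

Note the control object must outlive the call (static here), since the
kdamond keeps it on its internal list across iterations.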
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 2 ++
mm/damon/core.c | 25 ++++++++++++++++++++-----
2 files changed, 22 insertions(+), 5 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 562c7876ba88..b83987275ff9 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -659,6 +659,7 @@ struct damon_callback {
*
* @fn: Function to be called back.
* @data: Data that will be passed to @fn.
+ * @repeat: Repeat invocations.
* @return_code: Return code from @fn invocation.
*
* Control damon_call(), which requests specific kdamond to invoke a given
@@ -667,6 +668,7 @@ struct damon_callback {
struct damon_call_control {
int (*fn)(void *data);
void *data;
+ bool repeat;
int return_code;
/* private: internal use only */
/* informs if the kdamond finished handling of the request */
diff --git a/mm/damon/core.c b/mm/damon/core.c
index b0a0b98f6889..ffb87497dbb5 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1379,8 +1379,9 @@ bool damon_is_running(struct damon_ctx *ctx)
*
* Ask DAMON worker thread (kdamond) of @ctx to call a function with an
* argument data that respectively passed via &damon_call_control->fn and
- * &damon_call_control->data of @control, and wait until the kdamond finishes
- * handling of the request.
+ * &damon_call_control->data of @control. If &damon_call_control->repeat of
+ * @control is set, return as soon as the request is made. Otherwise, wait
+ * until the kdamond finishes handling of the request.
*
* The kdamond executes the function with the argument in the main loop, just
* after a sampling of the iteration is finished. The function can hence
@@ -1392,7 +1393,8 @@ bool damon_is_running(struct damon_ctx *ctx)
*/
int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
{
- init_completion(&control->completion);
+ if (!control->repeat)
+ init_completion(&control->completion);
control->canceled = false;
INIT_LIST_HEAD(&control->list);
@@ -1401,6 +1403,8 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
mutex_unlock(&ctx->call_controls_lock);
if (!damon_is_running(ctx))
return -EINVAL;
+ if (control->repeat)
+ return 0;
wait_for_completion(&control->completion);
if (control->canceled)
return -ECANCELED;
@@ -2429,6 +2433,7 @@ static void kdamond_usleep(unsigned long usecs)
static void kdamond_call(struct damon_ctx *ctx, bool cancel)
{
struct damon_call_control *control;
+ LIST_HEAD(repeat_controls);
int ret = 0;
while (true) {
@@ -2437,7 +2442,7 @@ static void kdamond_call(struct damon_ctx *ctx, bool cancel)
struct damon_call_control, list);
mutex_unlock(&ctx->call_controls_lock);
if (!control)
- return;
+ break;
if (cancel) {
control->canceled = true;
} else {
@@ -2447,8 +2452,18 @@ static void kdamond_call(struct damon_ctx *ctx, bool cancel)
mutex_lock(&ctx->call_controls_lock);
list_del(&control->list);
mutex_unlock(&ctx->call_controls_lock);
- complete(&control->completion);
+ if (!control->repeat)
+ complete(&control->completion);
+ else
+ list_add(&control->list, &repeat_controls);
}
+ control = list_first_entry_or_null(&repeat_controls,
+ struct damon_call_control, list);
+ if (!control || cancel)
+ return;
+ mutex_lock(&ctx->call_controls_lock);
+ list_add_tail(&control->list, &ctx->call_controls);
+ mutex_unlock(&ctx->call_controls_lock);
}
/* Returns negative error code if it's not activated but should return */
--
2.39.5
* [PATCH 03/14] mm/damon/stat: use damon_call() repeat mode instead of damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
DAMON_STAT uses damon_callback for periodically reading DAMON internal
data. Use its alternative, damon_call() repeat mode.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/stat.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/damon/stat.c b/mm/damon/stat.c
index b75af871627e..87bcd8866d4b 100644
--- a/mm/damon/stat.c
+++ b/mm/damon/stat.c
@@ -122,8 +122,9 @@ static void damon_stat_set_idletime_percentiles(struct damon_ctx *c)
kfree(sorted_regions);
}
-static int damon_stat_after_aggregation(struct damon_ctx *c)
+static int damon_stat_damon_call_fn(void *data)
{
+ struct damon_ctx *c = data;
static unsigned long last_refresh_jiffies;
/* avoid unnecessarily frequent stat update */
@@ -182,19 +183,29 @@ static struct damon_ctx *damon_stat_build_ctx(void)
damon_add_target(ctx, target);
if (damon_set_region_biggest_system_ram_default(target, &start, &end))
goto free_out;
- ctx->callback.after_aggregation = damon_stat_after_aggregation;
return ctx;
free_out:
damon_destroy_ctx(ctx);
return NULL;
}
+static struct damon_call_control call_control = {
+ .fn = damon_stat_damon_call_fn,
+ .repeat = true,
+};
+
static int damon_stat_start(void)
{
+ int err;
+
damon_stat_context = damon_stat_build_ctx();
if (!damon_stat_context)
return -ENOMEM;
- return damon_start(&damon_stat_context, 1, true);
+ err = damon_start(&damon_stat_context, 1, true);
+ if (err)
+ return err;
+ call_control.data = damon_stat_context;
+ return damon_call(damon_stat_context, &call_control);
}
static void damon_stat_stop(void)
--
2.39.5
* [PATCH 04/14] mm/damon/reclaim: use damon_call() repeat mode instead of damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
DAMON_RECLAIM uses damon_callback for periodically reading and writing
DAMON internal data and parameters. Use its alternative, damon_call()
repeat mode.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/reclaim.c | 62 +++++++++++++++++++++++-----------------------
1 file changed, 31 insertions(+), 31 deletions(-)
diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
index 0fe8996328b8..3c71b4596676 100644
--- a/mm/damon/reclaim.c
+++ b/mm/damon/reclaim.c
@@ -238,6 +238,35 @@ static int damon_reclaim_apply_parameters(void)
return err;
}
+static int damon_reclaim_handle_commit_inputs(void)
+{
+ int err;
+
+ if (!commit_inputs)
+ return 0;
+
+ err = damon_reclaim_apply_parameters();
+ commit_inputs = false;
+ return err;
+}
+
+static int damon_reclaim_damon_call_fn(void *arg)
+{
+ struct damon_ctx *c = arg;
+ struct damos *s;
+
+ /* update the stats parameter */
+ damon_for_each_scheme(s, c)
+ damon_reclaim_stat = s->stat;
+
+ return damon_reclaim_handle_commit_inputs();
+}
+
+static struct damon_call_control call_control = {
+ .fn = damon_reclaim_damon_call_fn,
+ .repeat = true,
+};
+
static int damon_reclaim_turn(bool on)
{
int err;
@@ -257,7 +286,7 @@ static int damon_reclaim_turn(bool on)
if (err)
return err;
kdamond_pid = ctx->kdamond->pid;
- return 0;
+ return damon_call(ctx, &call_control);
}
static int damon_reclaim_enabled_store(const char *val,
@@ -296,34 +325,6 @@ module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
MODULE_PARM_DESC(enabled,
"Enable or disable DAMON_RECLAIM (default: disabled)");
-static int damon_reclaim_handle_commit_inputs(void)
-{
- int err;
-
- if (!commit_inputs)
- return 0;
-
- err = damon_reclaim_apply_parameters();
- commit_inputs = false;
- return err;
-}
-
-static int damon_reclaim_after_aggregation(struct damon_ctx *c)
-{
- struct damos *s;
-
- /* update the stats parameter */
- damon_for_each_scheme(s, c)
- damon_reclaim_stat = s->stat;
-
- return damon_reclaim_handle_commit_inputs();
-}
-
-static int damon_reclaim_after_wmarks_check(struct damon_ctx *c)
-{
- return damon_reclaim_handle_commit_inputs();
-}
-
static int __init damon_reclaim_init(void)
{
int err = damon_modules_new_paddr_ctx_target(&ctx, &target);
@@ -331,8 +332,7 @@ static int __init damon_reclaim_init(void)
if (err)
goto out;
- ctx->callback.after_wmarks_check = damon_reclaim_after_wmarks_check;
- ctx->callback.after_aggregation = damon_reclaim_after_aggregation;
+ call_control.data = ctx;
/* 'enabled' has set before this function, probably via command line */
if (enabled)
--
2.39.5
* [PATCH 05/14] mm/damon/lru_sort: use damon_call() repeat mode instead of damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
DAMON_LRU_SORT uses damon_callback for periodically reading and writing
DAMON internal data and parameters. Use its alternative, damon_call()
repeat mode.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/lru_sort.c | 70 ++++++++++++++++++++++-----------------------
1 file changed, 35 insertions(+), 35 deletions(-)
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index dcac775eaa11..621f19546596 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -230,6 +230,39 @@ static int damon_lru_sort_apply_parameters(void)
return err;
}
+static int damon_lru_sort_handle_commit_inputs(void)
+{
+ int err;
+
+ if (!commit_inputs)
+ return 0;
+
+ err = damon_lru_sort_apply_parameters();
+ commit_inputs = false;
+ return err;
+}
+
+static int damon_lru_sort_damon_call_fn(void *arg)
+{
+ struct damon_ctx *c = arg;
+ struct damos *s;
+
+ /* update the stats parameter */
+ damon_for_each_scheme(s, c) {
+ if (s->action == DAMOS_LRU_PRIO)
+ damon_lru_sort_hot_stat = s->stat;
+ else if (s->action == DAMOS_LRU_DEPRIO)
+ damon_lru_sort_cold_stat = s->stat;
+ }
+
+ return damon_lru_sort_handle_commit_inputs();
+}
+
+static struct damon_call_control call_control = {
+ .fn = damon_lru_sort_damon_call_fn,
+ .repeat = true,
+};
+
static int damon_lru_sort_turn(bool on)
{
int err;
@@ -249,7 +282,7 @@ static int damon_lru_sort_turn(bool on)
if (err)
return err;
kdamond_pid = ctx->kdamond->pid;
- return 0;
+ return damon_call(ctx, &call_control);
}
static int damon_lru_sort_enabled_store(const char *val,
@@ -288,38 +321,6 @@ module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
MODULE_PARM_DESC(enabled,
"Enable or disable DAMON_LRU_SORT (default: disabled)");
-static int damon_lru_sort_handle_commit_inputs(void)
-{
- int err;
-
- if (!commit_inputs)
- return 0;
-
- err = damon_lru_sort_apply_parameters();
- commit_inputs = false;
- return err;
-}
-
-static int damon_lru_sort_after_aggregation(struct damon_ctx *c)
-{
- struct damos *s;
-
- /* update the stats parameter */
- damon_for_each_scheme(s, c) {
- if (s->action == DAMOS_LRU_PRIO)
- damon_lru_sort_hot_stat = s->stat;
- else if (s->action == DAMOS_LRU_DEPRIO)
- damon_lru_sort_cold_stat = s->stat;
- }
-
- return damon_lru_sort_handle_commit_inputs();
-}
-
-static int damon_lru_sort_after_wmarks_check(struct damon_ctx *c)
-{
- return damon_lru_sort_handle_commit_inputs();
-}
-
static int __init damon_lru_sort_init(void)
{
int err = damon_modules_new_paddr_ctx_target(&ctx, &target);
@@ -327,8 +328,7 @@ static int __init damon_lru_sort_init(void)
if (err)
goto out;
- ctx->callback.after_wmarks_check = damon_lru_sort_after_wmarks_check;
- ctx->callback.after_aggregation = damon_lru_sort_after_aggregation;
+ call_control.data = ctx;
/* 'enabled' has set before this function, probably via command line */
if (enabled)
--
2.39.5
* [PATCH 06/14] samples/damon/prcl: use damon_call() repeat mode instead of damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
prcl uses damon_callback for periodically reading DAMON internal data.
Use its alternative, damon_call() repeat mode.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
samples/damon/prcl.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/samples/damon/prcl.c b/samples/damon/prcl.c
index 8a312dba7691..25a751a67b2d 100644
--- a/samples/damon/prcl.c
+++ b/samples/damon/prcl.c
@@ -34,8 +34,9 @@ MODULE_PARM_DESC(enabled, "Enable or disable DAMON_SAMPLE_PRCL");
static struct damon_ctx *ctx;
static struct pid *target_pidp;
-static int damon_sample_prcl_after_aggregate(struct damon_ctx *c)
+static int damon_sample_prcl_repeat_call_fn(void *data)
{
+ struct damon_ctx *c = data;
struct damon_target *t;
damon_for_each_target(t, c) {
@@ -51,10 +52,16 @@ static int damon_sample_prcl_after_aggregate(struct damon_ctx *c)
return 0;
}
+static struct damon_call_control repeat_call_control = {
+ .fn = damon_sample_prcl_repeat_call_fn,
+ .repeat = true,
+};
+
static int damon_sample_prcl_start(void)
{
struct damon_target *target;
struct damos *scheme;
+ int err;
pr_info("start\n");
@@ -79,8 +86,6 @@ static int damon_sample_prcl_start(void)
}
target->pid = target_pidp;
- ctx->callback.after_aggregation = damon_sample_prcl_after_aggregate;
-
scheme = damon_new_scheme(
&(struct damos_access_pattern) {
.min_sz_region = PAGE_SIZE,
@@ -100,7 +105,12 @@ static int damon_sample_prcl_start(void)
}
damon_set_schemes(ctx, &scheme, 1);
- return damon_start(&ctx, 1, true);
+ err = damon_start(&ctx, 1, true);
+ if (err)
+ return err;
+
+ repeat_call_control.data = ctx;
+ return damon_call(ctx, &repeat_call_control);
}
static void damon_sample_prcl_stop(void)
--
2.39.5
* [PATCH 07/14] samples/damon/wsse: use damon_call() repeat mode instead of damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
wsse uses damon_callback for periodically reading DAMON internal data.
Use its alternative, damon_call() repeat mode.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
samples/damon/wsse.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
index d87b3b0801d2..a250e86b24a5 100644
--- a/samples/damon/wsse.c
+++ b/samples/damon/wsse.c
@@ -35,8 +35,9 @@ MODULE_PARM_DESC(enabled, "Enable or disable DAMON_SAMPLE_WSSE");
static struct damon_ctx *ctx;
static struct pid *target_pidp;
-static int damon_sample_wsse_after_aggregate(struct damon_ctx *c)
+static int damon_sample_wsse_repeat_call_fn(void *data)
{
+ struct damon_ctx *c = data;
struct damon_target *t;
damon_for_each_target(t, c) {
@@ -52,9 +53,15 @@ static int damon_sample_wsse_after_aggregate(struct damon_ctx *c)
return 0;
}
+static struct damon_call_control repeat_call_control = {
+ .fn = damon_sample_wsse_repeat_call_fn,
+ .repeat = true,
+};
+
static int damon_sample_wsse_start(void)
{
struct damon_target *target;
+ int err;
pr_info("start\n");
@@ -79,8 +86,11 @@ static int damon_sample_wsse_start(void)
}
target->pid = target_pidp;
- ctx->callback.after_aggregation = damon_sample_wsse_after_aggregate;
- return damon_start(&ctx, 1, true);
+ err = damon_start(&ctx, 1, true);
+ if (err)
+ return err;
+ repeat_call_control.data = ctx;
+ return damon_call(ctx, &repeat_call_control);
}
static void damon_sample_wsse_stop(void)
--
2.39.5
* [PATCH 08/14] mm/damon/core: do not call ops.cleanup() when destroying targets
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
damon_operations.cleanup() is documented to be called at kdamond
termination, but it is also called when destroying targets, which
happens for any damon_ctx destruction. Nobody is using the callback
for now, though. Remove the cleanup() call from the targets
destruction path.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/core.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index ffb87497dbb5..b82a838b5a0e 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -550,11 +550,6 @@ static void damon_destroy_targets(struct damon_ctx *ctx)
{
struct damon_target *t, *next_t;
- if (ctx->ops.cleanup) {
- ctx->ops.cleanup(ctx);
- return;
- }
-
damon_for_each_target_safe(t, next_t, ctx)
damon_destroy_target(t);
}
--
2.39.5
* [PATCH 09/14] mm/damon/core: add cleanup_target() ops callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
Some DAMON operation sets may need additional cleanup per target. For
example, [f]vaddr needs to put the pid of each target. Each user and
the core logic are currently doing that redundantly. Add another DAMON
ops callback that will be used for doing such cleanups in the
operations set layer.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 4 +++-
mm/damon/core.c | 12 ++++++++----
mm/damon/sysfs.c | 4 ++--
mm/damon/tests/core-kunit.h | 4 ++--
mm/damon/tests/vaddr-kunit.h | 2 +-
5 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index b83987275ff9..27305d39f600 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -608,6 +608,7 @@ enum damon_ops_id {
* filters (&struct damos_filter) that handled by itself.
* @target_valid should check whether the target is still valid for the
* monitoring.
+ * @cleanup_target is called before the target is deallocated.
* @cleanup is called from @kdamond just before its termination.
*/
struct damon_operations {
@@ -623,6 +624,7 @@ struct damon_operations {
struct damon_target *t, struct damon_region *r,
struct damos *scheme, unsigned long *sz_filter_passed);
bool (*target_valid)(struct damon_target *t);
+ void (*cleanup_target)(struct damon_target *t);
void (*cleanup)(struct damon_ctx *context);
};
@@ -933,7 +935,7 @@ struct damon_target *damon_new_target(void);
void damon_add_target(struct damon_ctx *ctx, struct damon_target *t);
bool damon_targets_empty(struct damon_ctx *ctx);
void damon_free_target(struct damon_target *t);
-void damon_destroy_target(struct damon_target *t);
+void damon_destroy_target(struct damon_target *t, struct damon_ctx *ctx);
unsigned int damon_nr_regions(struct damon_target *t);
struct damon_ctx *damon_new_ctx(void);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index b82a838b5a0e..678c9b4e038c 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -502,8 +502,12 @@ void damon_free_target(struct damon_target *t)
kfree(t);
}
-void damon_destroy_target(struct damon_target *t)
+void damon_destroy_target(struct damon_target *t, struct damon_ctx *ctx)
{
+
+ if (ctx && ctx->ops.cleanup_target)
+ ctx->ops.cleanup_target(t);
+
damon_del_target(t);
damon_free_target(t);
}
@@ -551,7 +555,7 @@ static void damon_destroy_targets(struct damon_ctx *ctx)
struct damon_target *t, *next_t;
damon_for_each_target_safe(t, next_t, ctx)
- damon_destroy_target(t);
+ damon_destroy_target(t, ctx);
}
void damon_destroy_ctx(struct damon_ctx *ctx)
@@ -1137,7 +1141,7 @@ static int damon_commit_targets(
if (damon_target_has_pid(dst))
put_pid(dst_target->pid);
- damon_destroy_target(dst_target);
+ damon_destroy_target(dst_target, dst);
damon_for_each_scheme(s, dst) {
if (s->quota.charge_target_from == dst_target) {
s->quota.charge_target_from = NULL;
@@ -1156,7 +1160,7 @@ static int damon_commit_targets(
err = damon_commit_target(new_target, false,
src_target, damon_target_has_pid(src));
if (err) {
- damon_destroy_target(new_target);
+ damon_destroy_target(new_target, NULL);
return err;
}
damon_add_target(dst, new_target);
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index c0193de6fb9a..f2f9f756f5a2 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1303,7 +1303,7 @@ static void damon_sysfs_destroy_targets(struct damon_ctx *ctx)
damon_for_each_target_safe(t, next, ctx) {
if (has_pid)
put_pid(t->pid);
- damon_destroy_target(t);
+ damon_destroy_target(t, ctx);
}
}
@@ -1389,7 +1389,7 @@ static void damon_sysfs_before_terminate(struct damon_ctx *ctx)
damon_for_each_target_safe(t, next, ctx) {
put_pid(t->pid);
- damon_destroy_target(t);
+ damon_destroy_target(t, ctx);
}
}
diff --git a/mm/damon/tests/core-kunit.h b/mm/damon/tests/core-kunit.h
index 298c67557fae..dfedfff19940 100644
--- a/mm/damon/tests/core-kunit.h
+++ b/mm/damon/tests/core-kunit.h
@@ -58,7 +58,7 @@ static void damon_test_target(struct kunit *test)
damon_add_target(c, t);
KUNIT_EXPECT_EQ(test, 1u, nr_damon_targets(c));
- damon_destroy_target(t);
+ damon_destroy_target(t, c);
KUNIT_EXPECT_EQ(test, 0u, nr_damon_targets(c));
damon_destroy_ctx(c);
@@ -310,7 +310,7 @@ static void damon_test_set_regions(struct kunit *test)
KUNIT_EXPECT_EQ(test, r->ar.start, expects[expect_idx++]);
KUNIT_EXPECT_EQ(test, r->ar.end, expects[expect_idx++]);
}
- damon_destroy_target(t);
+ damon_destroy_target(t, NULL);
}
static void damon_test_nr_accesses_to_accesses_bp(struct kunit *test)
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index 7cd944266a92..d2b37ccf2cc0 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -149,7 +149,7 @@ static void damon_do_test_apply_three_regions(struct kunit *test,
KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]);
}
- damon_destroy_target(t);
+ damon_destroy_target(t, NULL);
}
/*
--
2.39.5
* [PATCH 10/14] mm/damon/vaddr: put pid in cleanup_target()
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
Implement the cleanup_target() callback for [f]vaddr, which calls
put_pid() for each target that will be destroyed. Also remove the
put_pid() calls in the core, sysfs and sample modules, which were
needed only because of the lack of such self cleanup in vaddr.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/core.c | 2 --
mm/damon/sysfs.c | 10 ++--------
mm/damon/vaddr.c | 6 ++++++
samples/damon/prcl.c | 2 --
samples/damon/wsse.c | 2 --
5 files changed, 8 insertions(+), 14 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 678c9b4e038c..9554743dc992 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1139,8 +1139,6 @@ static int damon_commit_targets(
} else {
struct damos *s;
- if (damon_target_has_pid(dst))
- put_pid(dst_target->pid);
damon_destroy_target(dst_target, dst);
damon_for_each_scheme(s, dst) {
if (s->quota.charge_target_from == dst_target) {
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index f2f9f756f5a2..5eba6ac53939 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1298,13 +1298,9 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
static void damon_sysfs_destroy_targets(struct damon_ctx *ctx)
{
struct damon_target *t, *next;
- bool has_pid = damon_target_has_pid(ctx);
- damon_for_each_target_safe(t, next, ctx) {
- if (has_pid)
- put_pid(t->pid);
+ damon_for_each_target_safe(t, next, ctx)
damon_destroy_target(t, ctx);
- }
}
static int damon_sysfs_set_regions(struct damon_target *t,
@@ -1387,10 +1383,8 @@ static void damon_sysfs_before_terminate(struct damon_ctx *ctx)
if (!damon_target_has_pid(ctx))
return;
- damon_for_each_target_safe(t, next, ctx) {
- put_pid(t->pid);
+ damon_for_each_target_safe(t, next, ctx)
damon_destroy_target(t, ctx);
- }
}
/*
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 7f5dc9c221a0..94af19c4dfed 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -805,6 +805,11 @@ static bool damon_va_target_valid(struct damon_target *t)
return false;
}
+static void damon_va_cleanup_target(struct damon_target *t)
+{
+ put_pid(t->pid);
+}
+
#ifndef CONFIG_ADVISE_SYSCALLS
static unsigned long damos_madvise(struct damon_target *target,
struct damon_region *r, int behavior)
@@ -946,6 +951,7 @@ static int __init damon_va_initcall(void)
.prepare_access_checks = damon_va_prepare_access_checks,
.check_accesses = damon_va_check_accesses,
.target_valid = damon_va_target_valid,
+ .cleanup_target = damon_va_cleanup_target,
.cleanup = NULL,
.apply_scheme = damon_va_apply_scheme,
.get_scheme_score = damon_va_scheme_score,
diff --git a/samples/damon/prcl.c b/samples/damon/prcl.c
index 25a751a67b2d..1b839c06a612 100644
--- a/samples/damon/prcl.c
+++ b/samples/damon/prcl.c
@@ -120,8 +120,6 @@ static void damon_sample_prcl_stop(void)
damon_stop(&ctx, 1);
damon_destroy_ctx(ctx);
}
- if (target_pidp)
- put_pid(target_pidp);
}
static bool init_called;
diff --git a/samples/damon/wsse.c b/samples/damon/wsse.c
index a250e86b24a5..da052023b099 100644
--- a/samples/damon/wsse.c
+++ b/samples/damon/wsse.c
@@ -100,8 +100,6 @@ static void damon_sample_wsse_stop(void)
damon_stop(&ctx, 1);
damon_destroy_ctx(ctx);
}
- if (target_pidp)
- put_pid(target_pidp);
}
static bool init_called;
--
2.39.5
* [PATCH 11/14] mm/damon/sysfs: remove damon_sysfs_destroy_targets()
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
The function was introduced for putting pids and deallocating
unnecessary targets, and is hence called before damon_destroy_ctx().
Now vaddr puts the pid at each target's destruction (cleanup_target()),
and damon_destroy_ctx() deallocates the targets anyway. So
damon_sysfs_destroy_targets() has no reason to exist. Remove it.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/sysfs.c | 23 +++--------------------
1 file changed, 3 insertions(+), 20 deletions(-)
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index 5eba6ac53939..b0f7c60d655a 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1295,14 +1295,6 @@ static int damon_sysfs_set_attrs(struct damon_ctx *ctx,
return damon_set_attrs(ctx, &attrs);
}
-static void damon_sysfs_destroy_targets(struct damon_ctx *ctx)
-{
- struct damon_target *t, *next;
-
- damon_for_each_target_safe(t, next, ctx)
- damon_destroy_target(t, ctx);
-}
-
static int damon_sysfs_set_regions(struct damon_target *t,
struct damon_sysfs_regions *sysfs_regions)
{
@@ -1337,7 +1329,6 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
struct damon_ctx *ctx)
{
struct damon_target *t = damon_new_target();
- int err = -EINVAL;
if (!t)
return -ENOMEM;
@@ -1345,16 +1336,10 @@ static int damon_sysfs_add_target(struct damon_sysfs_target *sys_target,
if (damon_target_has_pid(ctx)) {
t->pid = find_get_pid(sys_target->pid);
if (!t->pid)
- goto destroy_targets_out;
+ /* caller will destroy targets */
+ return -EINVAL;
}
- err = damon_sysfs_set_regions(t, sys_target->regions);
- if (err)
- goto destroy_targets_out;
- return 0;
-
-destroy_targets_out:
- damon_sysfs_destroy_targets(ctx);
- return err;
+ return damon_sysfs_set_regions(t, sys_target->regions);
}
static int damon_sysfs_add_targets(struct damon_ctx *ctx,
@@ -1458,13 +1443,11 @@ static int damon_sysfs_commit_input(void *data)
test_ctx = damon_new_ctx();
err = damon_commit_ctx(test_ctx, param_ctx);
if (err) {
- damon_sysfs_destroy_targets(test_ctx);
damon_destroy_ctx(test_ctx);
goto out;
}
err = damon_commit_ctx(kdamond->damon_ctx, param_ctx);
out:
- damon_sysfs_destroy_targets(param_ctx);
damon_destroy_ctx(param_ctx);
return err;
}
--
2.39.5
* [PATCH 12/14] mm/damon/core: destroy targets when kdamond_fn() finish
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
When kdamond_fn() completes, the targets are kept, to let callers do
additional cleanups if they need to. There are no such additional
cleanups, though. The DAMON sysfs interface deallocates the targets in
its before_terminate() callback, only to reduce unnecessary memory
usage for the [f]vaddr use case. Just destroy the targets in every
case in the core layer. This saves more memory and simplifies the
logic.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 9554743dc992..ffd1a061c2cb 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2657,6 +2657,7 @@ static int kdamond_fn(void *data)
running_exclusive_ctxs = false;
mutex_unlock(&damon_lock);
+ damon_destroy_targets(ctx);
return 0;
}
--
2.39.5
* [PATCH 13/14] mm/damon/sysfs: remove damon_sysfs_before_terminate()
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
The DAMON core layer now does target cleanup on its own. Remove the
duplicated and unnecessarily selective cleanup attempts from the DAMON
sysfs interface.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
mm/damon/sysfs.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
index b0f7c60d655a..cce2c8a296e2 100644
--- a/mm/damon/sysfs.c
+++ b/mm/damon/sysfs.c
@@ -1361,17 +1361,6 @@ static int damon_sysfs_add_targets(struct damon_ctx *ctx,
return 0;
}
-static void damon_sysfs_before_terminate(struct damon_ctx *ctx)
-{
- struct damon_target *t, *next;
-
- if (!damon_target_has_pid(ctx))
- return;
-
- damon_for_each_target_safe(t, next, ctx)
- damon_destroy_target(t, ctx);
-}
-
/*
* damon_sysfs_upd_schemes_stats() - Update schemes stats sysfs files.
* @data: The kobject wrapper that associated to the kdamond thread.
@@ -1516,7 +1505,6 @@ static struct damon_ctx *damon_sysfs_build_ctx(
return ERR_PTR(err);
}
- ctx->callback.before_terminate = damon_sysfs_before_terminate;
return ctx;
}
--
2.39.5
* [PATCH 14/14] mm/damon/core: remove damon_callback
From: SeongJae Park @ 2025-07-12 19:50 UTC
To: Andrew Morton; +Cc: SeongJae Park, damon, kernel-team, linux-kernel, linux-mm
All damon_callback usages have been replaced by damon_call() and
damos_walk(). Time to say goodbye. Remove damon_callback.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/damon.h | 31 +------------------------------
mm/damon/core.c | 26 +++++++-------------------
2 files changed, 8 insertions(+), 49 deletions(-)
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 27305d39f600..34fc5407f98e 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -628,34 +628,6 @@ struct damon_operations {
void (*cleanup)(struct damon_ctx *context);
};
-/**
- * struct damon_callback - Monitoring events notification callbacks.
- *
- * @after_wmarks_check: Called after each schemes' watermarks check.
- * @after_aggregation: Called after each aggregation.
- * @before_terminate: Called before terminating the monitoring.
- *
- * The monitoring thread (&damon_ctx.kdamond) calls @before_terminate just
- * before finishing the monitoring.
- *
- * The monitoring thread calls @after_wmarks_check after each DAMON-based
- * operation schemes' watermarks check. If users need to make changes to the
- * attributes of the monitoring context while it's deactivated due to the
- * watermarks, this is the good place to do.
- *
- * The monitoring thread calls @after_aggregation for each of the aggregation
- * intervals. Therefore, users can safely access the monitoring results
- * without additional protection. For the reason, users are recommended to use
- * these callback for the accesses to the results.
- *
- * If any callback returns non-zero, monitoring stops.
- */
-struct damon_callback {
- int (*after_wmarks_check)(struct damon_ctx *context);
- int (*after_aggregation)(struct damon_ctx *context);
- void (*before_terminate)(struct damon_ctx *context);
-};
-
/*
* struct damon_call_control - Control damon_call().
*
@@ -726,7 +698,7 @@ struct damon_intervals_goal {
* ``mmap()`` calls from the application, in case of virtual memory monitoring)
* and applies the changes for each @ops_update_interval. All time intervals
* are in micro-seconds. Please refer to &struct damon_operations and &struct
- * damon_callback for more detail.
+ * damon_call_control for more detail.
*/
struct damon_attrs {
unsigned long sample_interval;
@@ -816,7 +788,6 @@ struct damon_ctx {
struct mutex kdamond_lock;
struct damon_operations ops;
- struct damon_callback callback;
struct list_head adaptive_targets;
struct list_head schemes;
diff --git a/mm/damon/core.c b/mm/damon/core.c
index ffd1a061c2cb..f3ec3bd736ec 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -680,9 +680,7 @@ static bool damon_valid_intervals_goal(struct damon_attrs *attrs)
* @attrs: monitoring attributes
*
* This function should be called while the kdamond is not running, an access
- * check results aggregation is not ongoing (e.g., from &struct
- * damon_callback->after_aggregation or &struct
- * damon_callback->after_wmarks_check callbacks), or from damon_call().
+ * check results aggregation is not ongoing (e.g., from damon_call()).
*
* Every time interval is in micro-seconds.
*
@@ -778,7 +776,7 @@ static void damos_commit_quota_goal(
* DAMON contexts, instead of manual in-place updates.
*
* This function should be called from parameters-update safe context, like
- * DAMON callbacks.
+ * damon_call().
*/
int damos_commit_quota_goals(struct damos_quota *dst, struct damos_quota *src)
{
@@ -1177,7 +1175,7 @@ static int damon_commit_targets(
* in-place updates.
*
* This function should be called from parameters-update safe context, like
- * DAMON callbacks.
+ * damon_call().
*/
int damon_commit_ctx(struct damon_ctx *dst, struct damon_ctx *src)
{
@@ -2484,9 +2482,6 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
kdamond_usleep(min_wait_time);
- if (ctx->callback.after_wmarks_check &&
- ctx->callback.after_wmarks_check(ctx))
- break;
kdamond_call(ctx, false);
damos_walk_cancel(ctx);
}
@@ -2543,10 +2538,9 @@ static int kdamond_fn(void *data)
while (!kdamond_need_stop(ctx)) {
/*
* ctx->attrs and ctx->next_{aggregation,ops_update}_sis could
- * be changed from after_wmarks_check() or after_aggregation()
- * callbacks. Read the values here, and use those for this
- * iteration. That is, damon_set_attrs() updated new values
- * are respected from next iteration.
+ * be changed from kdamond_call(). Read the values here, and
+ * use those for this iteration. That is, damon_set_attrs()
+ * updated new values are respected from next iteration.
*/
unsigned long next_aggregation_sis = ctx->next_aggregation_sis;
unsigned long next_ops_update_sis = ctx->next_ops_update_sis;
@@ -2564,14 +2558,10 @@ static int kdamond_fn(void *data)
if (ctx->ops.check_accesses)
max_nr_accesses = ctx->ops.check_accesses(ctx);
- if (ctx->passed_sample_intervals >= next_aggregation_sis) {
+ if (ctx->passed_sample_intervals >= next_aggregation_sis)
kdamond_merge_regions(ctx,
max_nr_accesses / 10,
sz_limit);
- if (ctx->callback.after_aggregation &&
- ctx->callback.after_aggregation(ctx))
- break;
- }
/*
* do kdamond_call() and kdamond_apply_schemes() after
@@ -2637,8 +2627,6 @@ static int kdamond_fn(void *data)
damon_destroy_region(r, t);
}
- if (ctx->callback.before_terminate)
- ctx->callback.before_terminate(ctx);
if (ctx->ops.cleanup)
ctx->ops.cleanup(ctx);
kfree(ctx->regions_score_histogram);
--
2.39.5