* [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list
@ 2026-03-22 22:56 Josh Law
2026-03-23 14:20 ` SeongJae Park
0 siblings, 1 reply; 4+ messages in thread
From: Josh Law @ 2026-03-22 22:56 UTC (permalink / raw)
To: sj, akpm; +Cc: damon, linux-mm, linux-kernel, Josh Law
Currently, kdamond_apply_schemes() iterates over all targets and regions
for every scheme in the context, even if the scheme is inactive due to
watermarks or hasn't reached its next apply interval.
This patch introduces a pre-filtered list of active schemes at the start
of kdamond_apply_schemes(). By only iterating over schemes that actually
need to be applied in the current interval, we significantly reduce the
overhead of the nested target/region loops.
This optimization maintains the original Target -> Region -> Scheme
behavior while providing substantial performance gains, especially when
many schemes are inactive.
Performance Benchmarks (Filtered Array vs Original):
| Scenario Description | Speedup |
|---------------------------|---------|
| Mostly Inactive (2/10) | 7.5x |
| Half Active (5/10) | 2.9x |
| All Active (10/10) | 1.3x |
Signed-off-by: Josh Law <objecting@objecting.org>
---
mm/damon/core.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/mm/damon/core.c b/mm/damon/core.c
index c884bb31c9b8..3b59e72defd4 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2114,19 +2114,16 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
static void damon_do_apply_schemes(struct damon_ctx *c,
struct damon_target *t,
- struct damon_region *r)
+ struct damon_region *r,
+ struct damos **active_schemes,
+ int nr_active_schemes)
{
- struct damos *s;
+ int i;
- damon_for_each_scheme(s, c) {
+ for (i = 0; i < nr_active_schemes; i++) {
+ struct damos *s = active_schemes[i];
struct damos_quota *quota = &s->quota;
- if (time_before(c->passed_sample_intervals, s->next_apply_sis))
- continue;
-
- if (!s->wmarks.activated)
- continue;
-
/* Check the quota */
if (quota->esz && quota->charged_sz >= quota->esz)
continue;
@@ -2476,7 +2473,8 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
struct damon_target *t;
struct damon_region *r;
struct damos *s;
- bool has_schemes_to_apply = false;
+ struct damos *active_schemes[32];
+ int nr_active_schemes = 0;
damon_for_each_scheme(s, c) {
if (time_before(c->passed_sample_intervals, s->next_apply_sis))
@@ -2485,12 +2483,15 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
if (!s->wmarks.activated)
continue;
- has_schemes_to_apply = true;
+ if (nr_active_schemes < ARRAY_SIZE(active_schemes))
+ active_schemes[nr_active_schemes++] = s;
+ else
+ WARN_ONCE(1, "too many schemes to apply");
damos_adjust_quota(c, s);
}
- if (!has_schemes_to_apply)
+ if (!nr_active_schemes)
return;
mutex_lock(&c->walk_control_lock);
@@ -2499,7 +2500,8 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
continue;
damon_for_each_region(r, t)
- damon_do_apply_schemes(c, t, r);
+ damon_do_apply_schemes(c, t, r, active_schemes,
+ nr_active_schemes);
}
damon_for_each_scheme(s, c) {
--
2.34.1
^ permalink raw reply related [flat|nested] 4+ messages in thread

* Re: [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list
2026-03-22 22:56 [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list Josh Law
@ 2026-03-23 14:20 ` SeongJae Park
2026-03-23 15:18 ` Josh Law
0 siblings, 1 reply; 4+ messages in thread
From: SeongJae Park @ 2026-03-23 14:20 UTC (permalink / raw)
To: Josh Law; +Cc: SeongJae Park, akpm, damon, linux-mm, linux-kernel
On Sun, 22 Mar 2026 22:56:27 +0000 Josh Law <objecting@objecting.org> wrote:
> Currently, kdamond_apply_schemes() iterates over all targets and regions
> for every scheme in the context, even if the scheme is inactive due to
> watermarks or hasn't reached its next apply interval.
>
> This patch introduces a pre-filtered list of active schemes at the start
> of kdamond_apply_schemes(). By only iterating over schemes that actually
> need to be applied in the current interval, we significantly reduce the
> overhead of the nested target/region loops.
>
> This optimization maintains the original Target -> Region -> Scheme
> behavior while providing substantial performance gains, especially when
> many schemes are inactive.
>
> Performance Benchmarks (Filtered Array vs Original):
> | Scenario Description | Speedup |
> |---------------------------|---------|
> | Mostly Inactive (2/10) | 7.5x |
> | Half Active (5/10) | 2.9x |
> | All Active (10/10) | 1.3x |
>
> Signed-off-by: Josh Law <objecting@objecting.org>
> ---
> mm/damon/core.c | 28 +++++++++++++++-------------
> 1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index c884bb31c9b8..3b59e72defd4 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -2114,19 +2114,16 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
>
> static void damon_do_apply_schemes(struct damon_ctx *c,
> struct damon_target *t,
> - struct damon_region *r)
> + struct damon_region *r,
> + struct damos **active_schemes,
> + int nr_active_schemes)
> {
> - struct damos *s;
> + int i;
>
> - damon_for_each_scheme(s, c) {
> + for (i = 0; i < nr_active_schemes; i++) {
> + struct damos *s = active_schemes[i];
> struct damos_quota *quota = &s->quota;
>
> - if (time_before(c->passed_sample_intervals, s->next_apply_sis))
> - continue;
> -
> - if (!s->wmarks.activated)
> - continue;
> -
> /* Check the quota */
> if (quota->esz && quota->charged_sz >= quota->esz)
> continue;
> @@ -2476,7 +2473,8 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
> struct damon_target *t;
> struct damon_region *r;
> struct damos *s;
> - bool has_schemes_to_apply = false;
> + struct damos *active_schemes[32];
> + int nr_active_schemes = 0;
>
> damon_for_each_scheme(s, c) {
> if (time_before(c->passed_sample_intervals, s->next_apply_sis))
> @@ -2485,12 +2483,15 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
> if (!s->wmarks.activated)
> continue;
>
> - has_schemes_to_apply = true;
> + if (nr_active_schemes < ARRAY_SIZE(active_schemes))
> + active_schemes[nr_active_schemes++] = s;
> + else
> + WARN_ONCE(1, "too many schemes to apply");
We may need to increase the size of the array instead of just warning. That
will make this code a little bit more complicated. I'm worried about the
maintenance burden of such complicated code more than the benefit of the
optimized performance here.
If this is a real bottleneck that bothers real users, we should optimize this
even if it makes the code dirty and more difficult to maintain. But at the
moment it is unclear whether this is a real bottleneck.
I'd suggest holding this for now, and revisiting if this clearly becomes a
bottleneck for a real use case, e.g., a user reports it.
Thanks,
SJ
[...]
* Re: [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list
2026-03-23 14:20 ` SeongJae Park
@ 2026-03-23 15:18 ` Josh Law
2026-03-24 0:27 ` SeongJae Park
0 siblings, 1 reply; 4+ messages in thread
From: Josh Law @ 2026-03-23 15:18 UTC (permalink / raw)
To: SeongJae Park; +Cc: akpm, damon, linux-mm, linux-kernel
On 23 March 2026 14:20:51 GMT, SeongJae Park <sj@kernel.org> wrote:
>On Sun, 22 Mar 2026 22:56:27 +0000 Josh Law <objecting@objecting.org> wrote:
>
>> Currently, kdamond_apply_schemes() iterates over all targets and regions
>> for every scheme in the context, even if the scheme is inactive due to
>> watermarks or hasn't reached its next apply interval.
>>
>> This patch introduces a pre-filtered list of active schemes at the start
>> of kdamond_apply_schemes(). By only iterating over schemes that actually
>> need to be applied in the current interval, we significantly reduce the
>> overhead of the nested target/region loops.
>>
>> This optimization maintains the original Target -> Region -> Scheme
>> behavior while providing substantial performance gains, especially when
>> many schemes are inactive.
>>
>> Performance Benchmarks (Filtered Array vs Original):
>> | Scenario Description | Speedup |
>> |---------------------------|---------|
>> | Mostly Inactive (2/10) | 7.5x |
>> | Half Active (5/10) | 2.9x |
>> | All Active (10/10) | 1.3x |
>>
>> Signed-off-by: Josh Law <objecting@objecting.org>
>> ---
>> mm/damon/core.c | 28 +++++++++++++++-------------
>> 1 file changed, 15 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/damon/core.c b/mm/damon/core.c
>> index c884bb31c9b8..3b59e72defd4 100644
>> --- a/mm/damon/core.c
>> +++ b/mm/damon/core.c
>> @@ -2114,19 +2114,16 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
>>
>> static void damon_do_apply_schemes(struct damon_ctx *c,
>> struct damon_target *t,
>> - struct damon_region *r)
>> + struct damon_region *r,
>> + struct damos **active_schemes,
>> + int nr_active_schemes)
>> {
>> - struct damos *s;
>> + int i;
>>
>> - damon_for_each_scheme(s, c) {
>> + for (i = 0; i < nr_active_schemes; i++) {
>> + struct damos *s = active_schemes[i];
>> struct damos_quota *quota = &s->quota;
>>
>> - if (time_before(c->passed_sample_intervals, s->next_apply_sis))
>> - continue;
>> -
>> - if (!s->wmarks.activated)
>> - continue;
>> -
>> /* Check the quota */
>> if (quota->esz && quota->charged_sz >= quota->esz)
>> continue;
>> @@ -2476,7 +2473,8 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
>> struct damon_target *t;
>> struct damon_region *r;
>> struct damos *s;
>> - bool has_schemes_to_apply = false;
>> + struct damos *active_schemes[32];
>> + int nr_active_schemes = 0;
>>
>> damon_for_each_scheme(s, c) {
>> if (time_before(c->passed_sample_intervals, s->next_apply_sis))
>> @@ -2485,12 +2483,15 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
>> if (!s->wmarks.activated)
>> continue;
>>
>> - has_schemes_to_apply = true;
>> + if (nr_active_schemes < ARRAY_SIZE(active_schemes))
>> + active_schemes[nr_active_schemes++] = s;
>> + else
>> + WARN_ONCE(1, "too many schemes to apply");
>
>We may need to increase the size of the array instead of just warning. That
>will make this code a little bit more complicated. I'm worried about the
>maintenance burden of such complicated code more than the benefit of the
>optimized performance here.
>
>If this is a real bottleneck that bothers real users, we should optimize this
>even if it makes the code dirty and more difficult to maintain. But at the
>moment it is unclear whether this is a real bottleneck.
>
>I'd suggest holding this for now, and revisiting if this clearly becomes a
>bottleneck for a real use case, e.g., a user reports it.
>
>
>Thanks,
>SJ
>
>[...]
Yep, I'll shelve this for the moment. Thanks for the clarification.
V/R
Josh Law
* Re: [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list
2026-03-23 15:18 ` Josh Law
@ 2026-03-24 0:27 ` SeongJae Park
0 siblings, 0 replies; 4+ messages in thread
From: SeongJae Park @ 2026-03-24 0:27 UTC (permalink / raw)
To: Josh Law; +Cc: SeongJae Park, akpm, damon, linux-mm, linux-kernel
On Mon, 23 Mar 2026 15:18:16 +0000 Josh Law <objecting@objecting.org> wrote:
>
>
> On 23 March 2026 14:20:51 GMT, SeongJae Park <sj@kernel.org> wrote:
> >On Sun, 22 Mar 2026 22:56:27 +0000 Josh Law <objecting@objecting.org> wrote:
[...]
> >I'd suggest to hold this for now, and revisit if this becomes clearly a
> >bottleneck of a real use case, e.g., a user claims so.
> >
> >
> >Thanks,
> >SJ
> >
> >[...]
>
>
> Yep, I'll shelve this for the moment. Thanks for the clarification.
Thank you for kindly accepting my suggestion, Josh.
Thanks,
SJ
[...]
end of thread, other threads: [~2026-03-24 0:28 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-22 22:56 [PATCH] mm/damon/core: optimize kdamond_apply_schemes() with pre-filtered scheme list Josh Law
2026-03-23 14:20 ` SeongJae Park
2026-03-23 15:18 ` Josh Law
2026-03-24 0:27 ` SeongJae Park