From: SeongJae Park
To:
Cc: SeongJae Park, Andrew Morton, damon@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v4 02/11] mm/damon/core: merge quota-sliced regions back
Date: Thu, 9 Apr 2026 07:21:37 -0700
Message-ID: <20260409142148.60652-3-sj@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260409142148.60652-1-sj@kernel.org>
References: <20260409142148.60652-1-sj@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

damos_apply_scheme() splits the given region so that applying the action
to it does not apply the action to more memory than the quota limit
allows.  When such splits happen multiple times, they could in theory
violate the user-set max_nr_regions limit.  If that happens, DAMON could
impose more overhead than the max_nr_regions-based expectation.

Such cases should be rare, since the real number of DAMON regions is
usually much lower than max_nr_regions.  Even if it happens, the impact
will be negligible, because the split operations can be made only up to
the number of schemes per scheme apply interval.

The impact could become higher after the following commit, though.  The
following commit will allow an action-failed region to be charged at a
different (smaller than the region size) ratio.  As a result, the split
operation could be made more frequently in a corner case.  It is still
only a theoretical corner case, but one better avoided unless the fix
causes other issues, since max_nr_regions is one of the important user
parameters.

Avoid the violation and the resulting overhead by merging the sliced
regions back, as soon as the schemes handling for the slices is done.
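The merge-back idea above can be illustrated with a simplified,
standalone sketch.  The `struct region`, `maybe_split()`, and
`apply_and_merge()` names below are hypothetical stand-ins, not DAMON's
actual types or traversal; the point is only the invariant the patch
relies on: the pre-split end address of each original region is recorded
before the apply step may split it, so every following node whose start
address is below that end must be a slice of the same original and can
be folded back in:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct damon_region: a [start, end) interval
 * on a singly linked list. */
struct region {
	unsigned long start, end;
	struct region *next;
};

/* Hypothetical stand-in for the quota split done inside the apply step:
 * if the region is larger than 'quota' bytes, keep the first 'quota'
 * bytes in r and insert the remainder right after it. */
static void maybe_split(struct region *r, unsigned long quota)
{
	if (r->end - r->start > quota) {
		struct region *rest = malloc(sizeof(*rest));

		rest->start = r->start + quota;
		rest->end = r->end;
		rest->next = r->next;
		r->end = rest->start;
		r->next = rest;
	}
}

/* Mirror of the patch's loop: remember each original region and its
 * pre-split end address, apply (and possibly split), then merge every
 * slice of the same original back into it and free the slice. */
static void apply_and_merge(struct region *head, unsigned long quota)
{
	struct region *orig = NULL;
	unsigned long orig_end = 0;

	for (struct region *r = head, *next; r; r = next) {
		/* A node starting at or after orig_end begins a new
		 * original region; capture its end before any split. */
		if (!orig || orig_end <= r->start) {
			orig = r;
			orig_end = r->end;
		}
		maybe_split(r, quota);
		next = r->next;
		if (r == orig)
			continue;
		/* r is a slice of orig: fold it back and drop it. */
		orig->end = r->end;
		orig->next = r->next;
		free(r);
	}
}
```

With a 100-byte region and a 40-byte quota, the apply step slices the
region into 40+40+20 bytes, and the merge step restores the single
100-byte region before the pass ends, so the region count observed
between passes never grows past the pre-split count.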
Signed-off-by: SeongJae Park
---
 mm/damon/core.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/mm/damon/core.c b/mm/damon/core.c
index c7d05d2385fe8..98ee776d98cd0 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -2159,6 +2159,33 @@ static void damon_do_apply_schemes(struct damon_ctx *c,
 	}
 }
 
+static void damos_apply_target(struct damon_ctx *c, struct damon_target *t)
+{
+	struct damon_region *r, *next, *orig_region = NULL;
+	unsigned long orig_end_addr;
+
+	damon_for_each_region_safe(r, next, t) {
+		/*
+		 * damon_do_apply_schemes() split the region if applying the
+		 * action to the whole region can make quota exceeded.  That
+		 * split can result in DAMON snapshot having more than
+		 * max_nr_regions regions.
+		 *
+		 * Merge back the sliced regions to the original region, as
+		 * soon as the schemes-handling of the slice is completed.
+		 */
+		if (!orig_region || orig_end_addr <= r->ar.start) {
+			orig_region = r;
+			orig_end_addr = r->ar.end;
+		}
+		damon_do_apply_schemes(c, t, r);
+		if (r == orig_region)
+			continue;
+		orig_region->ar.end = r->ar.end;
+		damon_destroy_region(r, t);
+	}
+}
+
 /*
  * damon_feed_loop_next_input() - get next input to achieve a target score.
  * @last_input	The last input.
@@ -2528,7 +2555,6 @@ static void damos_trace_stat(struct damon_ctx *c, struct damos *s)
 static void kdamond_apply_schemes(struct damon_ctx *c)
 {
 	struct damon_target *t;
-	struct damon_region *r;
 	struct damos *s;
 	bool has_schemes_to_apply = false;
 
@@ -2551,9 +2577,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 	damon_for_each_target(t, c) {
 		if (c->ops.target_valid && c->ops.target_valid(t) == false)
 			continue;
-
-		damon_for_each_region(r, t)
-			damon_do_apply_schemes(c, t, r);
+		damos_apply_target(c, t);
 	}
 
 	damon_for_each_scheme(s, c) {
-- 
2.47.3