From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, SeongJae Park, Andrew Morton
Subject: [PATCH 6.1 007/105] mm/damon/core: merge regions aggressively when max_nr_regions is unmet
Date: Tue, 23 Jul 2024 20:22:44 +0200
Message-ID: <20240723180402.944675581@linuxfoundation.org>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240723180402.490567226@linuxfoundation.org>
References: <20240723180402.490567226@linuxfoundation.org>
User-Agent: quilt/0.67
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.1-stable review patch.  If anyone has any objections, please let me know.

------------------

From: SeongJae Park

commit 310d6c15e9104c99d5d9d0ff8e5383a79da7d5e6 upstream.

DAMON keeps the number of regions under max_nr_regions by skipping region
split operations when doing so would push the number above the limit.  This
works well for preventing violations of the limit, but if a violation somehow
does happen, DAMON cannot always recover.  Specifically, if the real number
of regions with distinct access patterns is higher than the limit, the
mechanism cannot bring the number back below it, and the system can suffer
from high DAMON monitoring overhead.  Such a violation can actually happen:
for example, the user could reduce max_nr_regions while DAMON is running to a
value lower than the current number of regions.
Fix the problem by repeating the merge operations with increasing
aggressiveness in kdamond_merge_regions() until the limit is met.

[sj@kernel.org: increase regions merge aggressiveness while respecting min_nr_regions]
  Link: https://lkml.kernel.org/r/20240626164753.46270-1-sj@kernel.org
[sj@kernel.org: ensure max threshold attempt for max_nr_regions violation]
  Link: https://lkml.kernel.org/r/20240627163153.75969-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20240624175814.89611-1-sj@kernel.org
Fixes: b9a6ac4e4ede ("mm/damon: adaptively adjust regions")
Signed-off-by: SeongJae Park
Cc: [5.15+]
Signed-off-by: Andrew Morton
(cherry picked from commit 310d6c15e9104c99d5d9d0ff8e5383a79da7d5e6)
Signed-off-by: Greg Kroah-Hartman
---
 mm/damon/core.c |   21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -921,14 +921,31 @@ static void damon_merge_regions_of(struc
  * access frequencies are similar. This is for minimizing the monitoring
  * overhead under the dynamically changeable access pattern. If a merge was
  * unnecessarily made, later 'kdamond_split_regions()' will revert it.
+ *
+ * The total number of regions could be higher than the user-defined limit,
+ * max_nr_regions for some cases. For example, the user can update
+ * max_nr_regions to a number that lower than the current number of regions
+ * while DAMON is running. For such a case, repeat merging until the limit is
+ * met while increasing @threshold up to possible maximum level.
  */
 static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold,
 				  unsigned long sz_limit)
 {
 	struct damon_target *t;
+	unsigned int nr_regions;
+	unsigned int max_thres;
 
-	damon_for_each_target(t, c)
-		damon_merge_regions_of(t, threshold, sz_limit);
+	max_thres = c->attrs.aggr_interval /
+		(c->attrs.sample_interval ? c->attrs.sample_interval : 1);
+	do {
+		nr_regions = 0;
+		damon_for_each_target(t, c) {
+			damon_merge_regions_of(t, threshold, sz_limit);
+			nr_regions += damon_nr_regions(t);
+		}
+		threshold = max(1, threshold * 2);
+	} while (nr_regions > c->attrs.max_nr_regions &&
+			threshold / 2 < max_thres);
 }
 
 /*
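
As a side note for anyone who wants to poke at the retry policy outside the
kernel, below is a minimal userspace C sketch of the loop the patch adds.
The struct toy_attrs, merge_pass(), and the concrete numbers are made up for
illustration; merge_pass() merely stands in for damon_merge_regions_of(), and
only the do/while retry, the max(1, threshold * 2) doubling, and the
aggr_interval / sample_interval cap mirror the actual change.

/*
 * merge_retry_sketch.c - userspace illustration only, not kernel code.
 * Shows how the doubling-threshold retry keeps merging until the region
 * count respects max_nr_regions, or the threshold hits its useful maximum.
 */
#include <stdio.h>

struct toy_attrs {			/* stand-in for struct damon_attrs */
	unsigned long sample_interval;	/* microseconds */
	unsigned long aggr_interval;	/* microseconds */
	unsigned long max_nr_regions;
};

/*
 * Toy stand-in for damon_merge_regions_of(): pretend each pass with a
 * higher threshold roughly halves the number of remaining regions.  The
 * real code merges adjacent regions whose nr_accesses differ by at most
 * 'threshold'.
 */
static unsigned long merge_pass(unsigned long nr_regions, unsigned int threshold)
{
	(void)threshold;
	return nr_regions > 1 ? nr_regions / 2 : 1;
}

int main(void)
{
	struct toy_attrs a = { 5000, 100000, 100 };
	unsigned long nr_regions = 1000;	/* the limit is already violated */
	unsigned int threshold = 0;
	unsigned int max_thres;

	/*
	 * A region's nr_accesses can be at most aggr_interval / sample_interval,
	 * so raising the threshold beyond that cannot merge anything more.
	 */
	max_thres = a.aggr_interval /
		(a.sample_interval ? a.sample_interval : 1);

	do {
		nr_regions = merge_pass(nr_regions, threshold);
		printf("threshold=%u -> %lu regions\n", threshold, nr_regions);
		threshold = threshold ? threshold * 2 : 1;	/* max(1, threshold * 2) */
	} while (nr_regions > a.max_nr_regions && threshold / 2 < max_thres);

	return 0;
}

With the numbers above it runs four passes (1000 -> 500 -> 250 -> 125 -> 62
regions) and stops once the count drops below max_nr_regions; if the toy
merge never got there, the threshold / 2 < max_thres test would still bound
the number of retries, just as in the kernel change.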