From: Breno Leitao <leitao@debian.org>
To: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>,
Lance Yang <lance.yang@linux.dev>,
Vlastimil Babka <vbabka@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Mike Rapoport <rppt@kernel.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
usamaarif642@gmail.com, kas@kernel.org, kernel-team@meta.com,
Breno Leitao <leitao@debian.org>,
"Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
Subject: [PATCH v4 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
Date: Mon, 09 Mar 2026 04:07:31 -0700
Message-ID: <20260309-thp_logs-v4-2-926b9840083e@debian.org>
In-Reply-To: <20260309-thp_logs-v4-0-926b9840083e@debian.org>

Consolidate the repeated spin_lock/set_bit/clear_bit pattern in
anon_enabled_store() into a new change_anon_orders() helper that
loops over an orders[] array, setting the bit for the selected mode
and clearing the others.

Introduce enum anon_enabled_mode and anon_enabled_mode_strings[]
for the per-order anon THP setting.

Use sysfs_match_string() with the anon_enabled_mode_strings[] table
to replace the if/else chain of sysfs_streq() calls.

The helper uses test_and_set_bit()/test_and_clear_bit() to track
whether the state actually changed, so start_stop_khugepaged() is
only called when needed. When the mode is unchanged,
set_recommended_min_free_kbytes() is called directly to preserve
the watermark recalculation behavior of the original code.

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
---
mm/huge_memory.c | 84 +++++++++++++++++++++++++++++++++++---------------------
1 file changed, 52 insertions(+), 32 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8e2746ea74adf..2d5b05a416dab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -316,6 +316,20 @@ static ssize_t enabled_show(struct kobject *kobj,
return sysfs_emit(buf, "%s\n", output);
}
+enum anon_enabled_mode {
+ ANON_ENABLED_ALWAYS = 0,
+ ANON_ENABLED_MADVISE = 1,
+ ANON_ENABLED_INHERIT = 2,
+ ANON_ENABLED_NEVER = 3,
+};
+
+static const char * const anon_enabled_mode_strings[] = {
+ [ANON_ENABLED_ALWAYS] = "always",
+ [ANON_ENABLED_MADVISE] = "madvise",
+ [ANON_ENABLED_INHERIT] = "inherit",
+ [ANON_ENABLED_NEVER] = "never",
+};
+
static ssize_t enabled_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
@@ -515,48 +529,54 @@ static ssize_t anon_enabled_show(struct kobject *kobj,
return sysfs_emit(buf, "%s\n", output);
}
+static bool change_anon_orders(int order, enum anon_enabled_mode mode)
+{
+ static unsigned long *orders[] = {
+ &huge_anon_orders_always,
+ &huge_anon_orders_madvise,
+ &huge_anon_orders_inherit,
+ };
+ enum anon_enabled_mode m;
+ bool changed = false;
+
+ spin_lock(&huge_anon_orders_lock);
+ for (m = 0; m < ARRAY_SIZE(orders); m++) {
+ if (m == mode)
+ changed |= !test_and_set_bit(order, orders[m]);
+ else
+ changed |= test_and_clear_bit(order, orders[m]);
+ }
+ spin_unlock(&huge_anon_orders_lock);
+
+ return changed;
+}
+
static ssize_t anon_enabled_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
int order = to_thpsize(kobj)->order;
- ssize_t ret = count;
+ int mode;
- if (sysfs_streq(buf, "always")) {
- spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_inherit);
- clear_bit(order, &huge_anon_orders_madvise);
- set_bit(order, &huge_anon_orders_always);
- spin_unlock(&huge_anon_orders_lock);
- } else if (sysfs_streq(buf, "inherit")) {
- spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_madvise);
- set_bit(order, &huge_anon_orders_inherit);
- spin_unlock(&huge_anon_orders_lock);
- } else if (sysfs_streq(buf, "madvise")) {
- spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_inherit);
- set_bit(order, &huge_anon_orders_madvise);
- spin_unlock(&huge_anon_orders_lock);
- } else if (sysfs_streq(buf, "never")) {
- spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_inherit);
- clear_bit(order, &huge_anon_orders_madvise);
- spin_unlock(&huge_anon_orders_lock);
- } else
- ret = -EINVAL;
+ mode = sysfs_match_string(anon_enabled_mode_strings, buf);
+ if (mode < 0)
+ return -EINVAL;
- if (ret > 0) {
- int err;
+ if (change_anon_orders(order, mode)) {
+ int err = start_stop_khugepaged();
- err = start_stop_khugepaged();
if (err)
- ret = err;
+ return err;
+ } else {
+ /*
+ * Recalculate watermarks even when the mode didn't
+ * change, as the previous code always called
+ * start_stop_khugepaged() which does this internally.
+ */
+ set_recommended_min_free_kbytes();
}
- return ret;
+
+ return count;
}
static struct kobj_attribute anon_enabled_attr =
--
2.47.3