public inbox for linux-mm@kvack.org
* [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls
@ 2026-03-11 10:17 Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 1/4] mm: khugepaged: export set_recommended_min_free_kbytes() Breno Leitao
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-11 10:17 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Breno Leitao, Wei Yang

Writing to /sys/kernel/mm/transparent_hugepage/enabled causes
start_stop_khugepaged() to be called regardless of whether anything
changed. start_stop_khugepaged() then spams the printk ring buffer with
the exact same message, even when nothing changes.

For instance, if you have a custom vm.min_free_kbytes, just touching
/sys/kernel/mm/transparent_hugepage/enabled causes a printk message.
Example:

      # sysctl -w vm.min_free_kbytes=112382
      # for i in $(seq 100); do echo never > /sys/kernel/mm/transparent_hugepage/enabled ; done

and you get 100 WARN messages like the following, which is pretty dull:

      khugepaged: min_free_kbytes is not updated to 112381 because user defined value 112382 is preferred

A similar message shows up when setting THP to "always":

      # for i in $(seq 100); do
      #       echo 1024 > /proc/sys/vm/min_free_kbytes
      #       echo always > /sys/kernel/mm/transparent_hugepage/enabled
      # done

And then, we have 100 messages like:

      khugepaged: raising min_free_kbytes from 1024 to 67584 to help transparent hugepage allocations

This is especially common with configuration management systems that
write the THP configuration without reading it first, assuming that
nothing will happen if the configuration is unchanged; instead, every
such write prints these annoying messages.

For instance, at Meta's fleet, ~10K servers were producing 3.5M of
these messages per day.

Fix this by refactoring the sysfs _store helpers to act only on real
state changes, and by ratelimiting the messages.

This version is heavily based on Lorenzo's suggestion on V1.

---
Changes in v6:
- Keep the anon_enabled_mode_strings[] ordered similarly to sysfs format (Wei)
- Link to v5: https://patch.msgid.link/20260310-thp_logs-v5-0-686099175bf6@debian.org

Changes in v5:
- Used the lockless __test_and_set_bit() primitives (David)
- Rename the functions and variables to match the code (David)
- Link to v4: https://patch.msgid.link/20260309-thp_logs-v4-0-926b9840083e@debian.org

Changes in v4:
- Use the enum instead of int in the new functions (akpm).
- Explicitly initialize the enum values (akpm).
- Link to v3: https://patch.msgid.link/20260307-thp_logs-v3-0-a45d2c8f3685@debian.org

Changes in v3:
- Extra ratelimit patch.
- Create two enums, one for anon and one for global. (Lorenzo)
- Remove the `extern` from set_recommended_min_free_kbytes (Lorenzo)
- Export set_recommended_min_free_kbytes() definition to mm/internal.h
  (Lorenzo)
- Link to v2: https://patch.msgid.link/20260305-thp_logs-v2-0-96b3ad795894@debian.org

Changes in v2:
- V2 is heavily based on Lorenzo and Kiryl feedback on v1.
- Link to v1: https://patch.msgid.link/20260304-thp_logs-v1-0-59038218a253@debian.org

---
Breno Leitao (4):
      mm: khugepaged: export set_recommended_min_free_kbytes()
      mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
      mm: huge_memory: refactor enabled_store() with change_enabled()
      mm: ratelimit min_free_kbytes adjustment messages

 mm/huge_memory.c | 147 +++++++++++++++++++++++++++++++++++++------------------
 mm/internal.h    |   5 ++
 mm/khugepaged.c  |   6 +--
 mm/page_alloc.c  |   4 +-
 4 files changed, 110 insertions(+), 52 deletions(-)
---
base-commit: 9dd5012f78d699f7a6051583dc53adeb401e28f0
change-id: 20260303-thp_logs-059d6b80f6d6

Best regards,
--  
Breno Leitao <leitao@debian.org>



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v6 1/4] mm: khugepaged: export set_recommended_min_free_kbytes()
  2026-03-11 10:17 [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
@ 2026-03-11 10:17 ` Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders() Breno Leitao
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-11 10:17 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Breno Leitao

Make set_recommended_min_free_kbytes() callable from outside
khugepaged.c by removing the static qualifier and adding a
declaration in mm/internal.h.

This allows callers that change THP settings to recalculate
watermarks without going through start_stop_khugepaged().

Suggested-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Breno Leitao <leitao@debian.org>
---
 mm/internal.h   | 5 +++++
 mm/khugepaged.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d99..7bd768e367793 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -623,6 +623,11 @@ int user_proactive_reclaim(char *buf,
  */
 pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
+/*
+ * in mm/khugepaged.c
+ */
+void set_recommended_min_free_kbytes(void);
+
 /*
  * in mm/page_alloc.c
  */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1dd3cfca610db..56a41c21b44c9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2630,7 +2630,7 @@ static int khugepaged(void *none)
 	return 0;
 }
 
-static void set_recommended_min_free_kbytes(void)
+void set_recommended_min_free_kbytes(void)
 {
 	struct zone *zone;
 	int nr_zones = 0;

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
  2026-03-11 10:17 [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 1/4] mm: khugepaged: export set_recommended_min_free_kbytes() Breno Leitao
@ 2026-03-11 10:17 ` Breno Leitao
  2026-03-11 11:44   ` Lance Yang
  2026-03-23  9:00   ` David Hildenbrand (Arm)
  2026-03-11 10:17 ` [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled() Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 4/4] mm: ratelimit min_free_kbytes adjustment messages Breno Leitao
  3 siblings, 2 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-11 10:17 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Breno Leitao, Lorenzo Stoakes (Oracle)

Consolidate the repeated spin_lock/set_bit/clear_bit pattern in
anon_enabled_store() into a new set_anon_enabled_mode() helper that
loops over an enabled_orders[] array, setting the bit for the selected
mode and clearing the others.

Introduce enum anon_enabled_mode and anon_enabled_mode_strings[]
for the per-order anon THP setting.

Use sysfs_match_string() with the anon_enabled_mode_strings[] table
to replace the if/else chain of sysfs_streq() calls.

The helper uses __test_and_set_bit()/__test_and_clear_bit() under
huge_anon_orders_lock to track whether the state actually changed, so
start_stop_khugepaged() is only called when needed. When the mode is
unchanged, set_recommended_min_free_kbytes() is called directly to
preserve the watermark recalculation behavior of the original code.

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
---
 mm/huge_memory.c | 84 +++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 52 insertions(+), 32 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8e2746ea74adf..f6af90e6cf05d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -316,6 +316,20 @@ static ssize_t enabled_show(struct kobject *kobj,
 	return sysfs_emit(buf, "%s\n", output);
 }
 
+enum anon_enabled_mode {
+	ANON_ENABLED_ALWAYS	= 0,
+	ANON_ENABLED_INHERIT	= 1,
+	ANON_ENABLED_MADVISE	= 2,
+	ANON_ENABLED_NEVER	= 3,
+};
+
+static const char * const anon_enabled_mode_strings[] = {
+	[ANON_ENABLED_ALWAYS]	= "always",
+	[ANON_ENABLED_INHERIT]	= "inherit",
+	[ANON_ENABLED_MADVISE]	= "madvise",
+	[ANON_ENABLED_NEVER]	= "never",
+};
+
 static ssize_t enabled_store(struct kobject *kobj,
 			     struct kobj_attribute *attr,
 			     const char *buf, size_t count)
@@ -515,48 +529,54 @@ static ssize_t anon_enabled_show(struct kobject *kobj,
 	return sysfs_emit(buf, "%s\n", output);
 }
 
+static bool set_anon_enabled_mode(int order, enum anon_enabled_mode mode)
+{
+	static unsigned long *enabled_orders[] = {
+		&huge_anon_orders_always,
+		&huge_anon_orders_inherit,
+		&huge_anon_orders_madvise,
+	};
+	enum anon_enabled_mode m;
+	bool changed = false;
+
+	spin_lock(&huge_anon_orders_lock);
+	for (m = 0; m < ARRAY_SIZE(enabled_orders); m++) {
+		if (m == mode)
+			changed |= !__test_and_set_bit(order, enabled_orders[m]);
+		else
+			changed |= __test_and_clear_bit(order, enabled_orders[m]);
+	}
+	spin_unlock(&huge_anon_orders_lock);
+
+	return changed;
+}
+
 static ssize_t anon_enabled_store(struct kobject *kobj,
 				  struct kobj_attribute *attr,
 				  const char *buf, size_t count)
 {
 	int order = to_thpsize(kobj)->order;
-	ssize_t ret = count;
+	int mode;
 
-	if (sysfs_streq(buf, "always")) {
-		spin_lock(&huge_anon_orders_lock);
-		clear_bit(order, &huge_anon_orders_inherit);
-		clear_bit(order, &huge_anon_orders_madvise);
-		set_bit(order, &huge_anon_orders_always);
-		spin_unlock(&huge_anon_orders_lock);
-	} else if (sysfs_streq(buf, "inherit")) {
-		spin_lock(&huge_anon_orders_lock);
-		clear_bit(order, &huge_anon_orders_always);
-		clear_bit(order, &huge_anon_orders_madvise);
-		set_bit(order, &huge_anon_orders_inherit);
-		spin_unlock(&huge_anon_orders_lock);
-	} else if (sysfs_streq(buf, "madvise")) {
-		spin_lock(&huge_anon_orders_lock);
-		clear_bit(order, &huge_anon_orders_always);
-		clear_bit(order, &huge_anon_orders_inherit);
-		set_bit(order, &huge_anon_orders_madvise);
-		spin_unlock(&huge_anon_orders_lock);
-	} else if (sysfs_streq(buf, "never")) {
-		spin_lock(&huge_anon_orders_lock);
-		clear_bit(order, &huge_anon_orders_always);
-		clear_bit(order, &huge_anon_orders_inherit);
-		clear_bit(order, &huge_anon_orders_madvise);
-		spin_unlock(&huge_anon_orders_lock);
-	} else
-		ret = -EINVAL;
+	mode = sysfs_match_string(anon_enabled_mode_strings, buf);
+	if (mode < 0)
+		return -EINVAL;
 
-	if (ret > 0) {
-		int err;
+	if (set_anon_enabled_mode(order, mode)) {
+		int err = start_stop_khugepaged();
 
-		err = start_stop_khugepaged();
 		if (err)
-			ret = err;
+			return err;
+	} else {
+		/*
+		 * Recalculate watermarks even when the mode didn't
+		 * change, as the previous code always called
+		 * start_stop_khugepaged() which does this internally.
+		 */
+		set_recommended_min_free_kbytes();
 	}
-	return ret;
+
+	return count;
 }
 
 static struct kobj_attribute anon_enabled_attr =

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-11 10:17 [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 1/4] mm: khugepaged: export set_recommended_min_free_kbytes() Breno Leitao
  2026-03-11 10:17 ` [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders() Breno Leitao
@ 2026-03-11 10:17 ` Breno Leitao
  2026-03-11 12:20   ` Lance Yang
  2026-03-13 22:31   ` Sohil Mehta
  2026-03-11 10:17 ` [PATCH v6 4/4] mm: ratelimit min_free_kbytes adjustment messages Breno Leitao
  3 siblings, 2 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-11 10:17 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Wei Yang, Breno Leitao

Refactor enabled_store() to use a new set_global_enabled_mode() helper.
Introduce a separate enum global_enabled_mode and
global_enabled_mode_strings[], mirroring the anon_enabled_mode
pattern from the previous commit.

A separate enum is necessary because the global THP setting does
not support "inherit", only "always", "madvise", and "never".
Reusing anon_enabled_mode would leave a NULL gap in the string
array, causing sysfs_match_string() to stop early and fail to
match entries after the gap.

The helper uses the same loop pattern as set_anon_enabled_mode(),
iterating over an array of flag bit positions and using
__test_and_set_bit()/__test_and_clear_bit() to track whether the state
actually changed.

Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Breno Leitao <leitao@debian.org>
---
 mm/huge_memory.c | 63 ++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 48 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f6af90e6cf05d..8949393de7a51 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -330,30 +330,63 @@ static const char * const anon_enabled_mode_strings[] = {
 	[ANON_ENABLED_NEVER]	= "never",
 };
 
+enum global_enabled_mode {
+	GLOBAL_ENABLED_ALWAYS	= 0,
+	GLOBAL_ENABLED_MADVISE	= 1,
+	GLOBAL_ENABLED_NEVER	= 2,
+};
+
+static const char * const global_enabled_mode_strings[] = {
+	[GLOBAL_ENABLED_ALWAYS]		= "always",
+	[GLOBAL_ENABLED_MADVISE]	= "madvise",
+	[GLOBAL_ENABLED_NEVER]		= "never",
+};
+
+static bool set_global_enabled_mode(enum global_enabled_mode mode)
+{
+	static const unsigned long thp_flags[] = {
+		TRANSPARENT_HUGEPAGE_FLAG,
+		TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+	};
+	enum global_enabled_mode m;
+	bool changed = false;
+
+	for (m = 0; m < ARRAY_SIZE(thp_flags); m++) {
+		if (m == mode)
+			changed |= !__test_and_set_bit(thp_flags[m],
+						       &transparent_hugepage_flags);
+		else
+			changed |= __test_and_clear_bit(thp_flags[m],
+							&transparent_hugepage_flags);
+	}
+
+	return changed;
+}
+
 static ssize_t enabled_store(struct kobject *kobj,
 			     struct kobj_attribute *attr,
 			     const char *buf, size_t count)
 {
-	ssize_t ret = count;
+	int mode;
 
-	if (sysfs_streq(buf, "always")) {
-		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
-		set_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
-	} else if (sysfs_streq(buf, "madvise")) {
-		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
-		set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
-	} else if (sysfs_streq(buf, "never")) {
-		clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
-		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
-	} else
-		ret = -EINVAL;
+	mode = sysfs_match_string(global_enabled_mode_strings, buf);
+	if (mode < 0)
+		return -EINVAL;
 
-	if (ret > 0) {
+	if (set_global_enabled_mode(mode)) {
 		int err = start_stop_khugepaged();
+
 		if (err)
-			ret = err;
+			return err;
+	} else {
+		/*
+		 * Recalculate watermarks even when the mode didn't
+		 * change, as the previous code always called
+		 * start_stop_khugepaged() which does this internally.
+		 */
+		set_recommended_min_free_kbytes();
 	}
-	return ret;
+	return count;
 }
 
 static struct kobj_attribute enabled_attr = __ATTR_RW(enabled);

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v6 4/4] mm: ratelimit min_free_kbytes adjustment messages
  2026-03-11 10:17 [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
                   ` (2 preceding siblings ...)
  2026-03-11 10:17 ` [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled() Breno Leitao
@ 2026-03-11 10:17 ` Breno Leitao
  3 siblings, 0 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-11 10:17 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Breno Leitao

The "raising min_free_kbytes" pr_info message in
set_recommended_min_free_kbytes() and the "min_free_kbytes is not
updated to" pr_warn in calculate_min_free_kbytes() can spam the
kernel log when called repeatedly.

Switch the pr_info in set_recommended_min_free_kbytes() and the
pr_warn in calculate_min_free_kbytes() to their _ratelimited variants
to prevent this log spam.

Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Breno Leitao <leitao@debian.org>
---
 mm/khugepaged.c | 4 ++--
 mm/page_alloc.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 56a41c21b44c9..d44d463ccfd3e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2671,8 +2671,8 @@ void set_recommended_min_free_kbytes(void)
 
 	if (recommended_min > min_free_kbytes) {
 		if (user_min_free_kbytes >= 0)
-			pr_info("raising min_free_kbytes from %d to %lu to help transparent hugepage allocations\n",
-				min_free_kbytes, recommended_min);
+			pr_info_ratelimited("raising min_free_kbytes from %d to %lu to help transparent hugepage allocations\n",
+					    min_free_kbytes, recommended_min);
 
 		min_free_kbytes = recommended_min;
 	}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554ed..c840c886807bf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6553,8 +6553,8 @@ void calculate_min_free_kbytes(void)
 	if (new_min_free_kbytes > user_min_free_kbytes)
 		min_free_kbytes = clamp(new_min_free_kbytes, 128, 262144);
 	else
-		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
-				new_min_free_kbytes, user_min_free_kbytes);
+		pr_warn_ratelimited("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
+				    new_min_free_kbytes, user_min_free_kbytes);
 
 }
 

-- 
2.52.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
  2026-03-11 10:17 ` [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders() Breno Leitao
@ 2026-03-11 11:44   ` Lance Yang
  2026-03-23  9:00   ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 15+ messages in thread
From: Lance Yang @ 2026-03-11 11:44 UTC (permalink / raw)
  To: Breno Leitao
  Cc: linux-mm, Mike Rapoport, Vlastimil Babka, linux-kernel, Dev Jain,
	usamaarif642, Baolin Wang, kas, Michal Hocko, Suren Baghdasaryan,
	Nico Pache, Zi Yan, kernel-team, Lorenzo Stoakes (Oracle),
	Liam R. Howlett, David Hildenbrand, Lorenzo Stoakes,
	Brendan Jackman, Andrew Morton, Barry Song, Johannes Weiner,
	Ryan Roberts



On 2026/3/11 18:17, Breno Leitao wrote:
> Consolidate the repeated spin_lock/set_bit/clear_bit pattern in
> anon_enabled_store() into a new set_anon_enabled_mode() helper that
> loops over an enabled_orders[] array, setting the bit for the selected
> mode and clearing the others.
> 
> Introduce enum anon_enabled_mode and anon_enabled_mode_strings[]
> for the per-order anon THP setting.
> 
> Use sysfs_match_string() with the anon_enabled_mode_strings[] table
> to replace the if/else chain of sysfs_streq() calls.
> 
> The helper uses __test_and_set_bit()/__test_and_clear_bit() under
> huge_anon_orders_lock to track whether the state actually changed, so
> start_stop_khugepaged() is only called when needed. When the mode is
> unchanged, set_recommended_min_free_kbytes() is called directly to
> preserve the watermark recalculation behavior of the original code.
> 
> Signed-off-by: Breno Leitao <leitao@debian.org>
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> ---

Thanks!

Tested-by: Lance Yang <lance.yang@linux.dev>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-11 10:17 ` [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled() Breno Leitao
@ 2026-03-11 12:20   ` Lance Yang
  2026-03-13 22:31   ` Sohil Mehta
  1 sibling, 0 replies; 15+ messages in thread
From: Lance Yang @ 2026-03-11 12:20 UTC (permalink / raw)
  To: Breno Leitao
  Cc: linux-mm, Baolin Wang, Barry Song, Ryan Roberts, Michal Hocko,
	Vlastimil Babka, Suren Baghdasaryan, Zi Yan, Johannes Weiner,
	David Hildenbrand, Liam R. Howlett, Andrew Morton, Mike Rapoport,
	linux-kernel, Brendan Jackman, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Wei Yang, Dev Jain, Nico Pache,
	Lorenzo Stoakes



On 2026/3/11 18:17, Breno Leitao wrote:
> Refactor enabled_store() to use a new set_global_enabled_mode() helper.
> Introduce a separate enum global_enabled_mode and
> global_enabled_mode_strings[], mirroring the anon_enabled_mode
> pattern from the previous commit.
> 
> A separate enum is necessary because the global THP setting does
> not support "inherit", only "always", "madvise", and "never".
> Reusing anon_enabled_mode would leave a NULL gap in the string
> array, causing sysfs_match_string() to stop early and fail to
> match entries after the gap.
> 
> The helper uses the same loop pattern as set_anon_enabled_mode(),
> iterating over an array of flag bit positions and using
> __test_and_set_bit()/__test_and_clear_bit() to track whether the state
> actually changed.
> 
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
> Signed-off-by: Breno Leitao <leitao@debian.org>
> ---

Thanks!

Tested-by: Lance Yang <lance.yang@linux.dev>


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-11 10:17 ` [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled() Breno Leitao
  2026-03-11 12:20   ` Lance Yang
@ 2026-03-13 22:31   ` Sohil Mehta
  2026-03-16 10:12     ` Breno Leitao
  1 sibling, 1 reply; 15+ messages in thread
From: Sohil Mehta @ 2026-03-13 22:31 UTC (permalink / raw)
  To: Breno Leitao, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Wei Yang, Accardi, Kristen C

Hello,

On 3/11/2026 3:17 AM, Breno Leitao wrote:

> +static bool set_global_enabled_mode(enum global_enabled_mode mode)
> +{
> +	static const unsigned long thp_flags[] = {
> +		TRANSPARENT_HUGEPAGE_FLAG,
> +		TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> +	};
> +	enum global_enabled_mode m;
> +	bool changed = false;
> +
> +	for (m = 0; m < ARRAY_SIZE(thp_flags); m++) {
> +		if (m == mode)
> +			changed |= !__test_and_set_bit(thp_flags[m],
> +						       &transparent_hugepage_flags);
> +		else
> +			changed |= __test_and_clear_bit(thp_flags[m],
> +							&transparent_hugepage_flags);

I am not a mm expert and typically do not follow the mm list. Is there
an issue with the usage of non-atomic variants here? The commit message
says this uses the same pattern as set_anon_enabled_mode().

However, set_anon_enabled_mode() has a spinlock=>huge_anon_orders_lock
protecting the access. But, transparent_hugepage_flags seems to be
unprotected in that regard.

IIUC, David's suggestion to use the lockless version was also based on
the use of a lock in that context.
https://lore.kernel.org/all/4f2abf42-983f-4cc2-92f5-c81827e7b7e2@kernel.org/

Note: We ran into this issue while experimenting with an AI review agent
internally. The patch was selected at random from the mailing list. I
analyzed the issue independently and came to the same conclusion. I
apologize if this is a false alarm.


> +	}
> +
> +	return changed;
> +}
> +


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-13 22:31   ` Sohil Mehta
@ 2026-03-16 10:12     ` Breno Leitao
  2026-03-16 23:26       ` Sohil Mehta
  0 siblings, 1 reply; 15+ messages in thread
From: Breno Leitao @ 2026-03-16 10:12 UTC (permalink / raw)
  To: Sohil Mehta
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport,
	linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Wei Yang, Accardi, Kristen C

Hello Sohil,

On Fri, Mar 13, 2026 at 03:31:25PM -0700, Sohil Mehta wrote:
> Hello,
> 
> On 3/11/2026 3:17 AM, Breno Leitao wrote:
> 
> > +static bool set_global_enabled_mode(enum global_enabled_mode mode)
> > +{
> > +	static const unsigned long thp_flags[] = {
> > +		TRANSPARENT_HUGEPAGE_FLAG,
> > +		TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
> > +	};
> > +	enum global_enabled_mode m;
> > +	bool changed = false;
> > +
> > +	for (m = 0; m < ARRAY_SIZE(thp_flags); m++) {
> > +		if (m == mode)
> > +			changed |= !__test_and_set_bit(thp_flags[m],
> > +						       &transparent_hugepage_flags);
> > +		else
> > +			changed |= __test_and_clear_bit(thp_flags[m],
> > +							&transparent_hugepage_flags);
> 
> I am not a mm expert and typically do not follow the mm list. Is there
> an issue with the usage of non-atomic variants here? The commit message
> says this uses the same pattern as set_anon_enabled_mode().
> 
> However, set_anon_enabled_mode() has a spinlock=>huge_anon_orders_lock
> protecting the access. But, transparent_hugepage_flags seems to be
> unprotected in that regard.

I don't think atomic vs non-atomic helps much here, given this is a
compound operation. Whether or not each bit flip is atomic, the update
is racy against anyone else changing these fields
(transparent_hugepage_flags). In other words, atomic ops make each
individual bit flip safe, but set_global_enabled_mode() and
defrag_store() need to flip multiple bits as a group. Even with atomic
ops, two concurrent writers can still interleave and leave the flags in
an invalid state.

That said, although I don't think this patch makes it worse, there is
a race here that we can improve.

My suggestion is to move the remaining helpers (defrag_store()) to use
sysfs_match_string(), and then create a thp_flags_lock spinlock to
protect operations on transparent_hugepage_flags. Any concerns about
this approach?

Thanks for reviewing it,
--breno


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-16 10:12     ` Breno Leitao
@ 2026-03-16 23:26       ` Sohil Mehta
  2026-03-17  9:37         ` Lorenzo Stoakes (Oracle)
  0 siblings, 1 reply; 15+ messages in thread
From: Sohil Mehta @ 2026-03-16 23:26 UTC (permalink / raw)
  To: Breno Leitao
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
	Michal Hocko, Brendan Jackman, Johannes Weiner, Mike Rapoport,
	linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle), Wei Yang, Accardi, Kristen C

On 3/16/2026 3:12 AM, Breno Leitao wrote:

>> I am not a mm expert and typically do not follow the mm list. Is there
>> an issue with the usage of non-atomic variants here? The commit message
>> says this uses the same pattern as set_anon_enabled_mode().
>>
>> However, set_anon_enabled_mode() has a spinlock=>huge_anon_orders_lock
>> protecting the access. But, transparent_hugepage_flags seems to be
>> unprotected in that regard.
> 
> I don't think atomic vs non-atomic helps much here, given this is a
> compound operation. Whether or not each bit flip is atomic, the update
> is racy against anyone else changing these fields
> (transparent_hugepage_flags). In other words, atomic ops make each
> individual bit flip safe, but set_global_enabled_mode() and
> defrag_store() need to flip multiple bits as a group. Even with atomic
> ops, two concurrent writers can still interleave and leave the flags in
> an invalid state.

You are right, it is a compound operation. So, there is an existing issue
with two concurrent writers which can leave the flags in an invalid state.

But, I was wondering if there is a slightly different issue now due to
the non-atomic part. Some updates could be lost depending on the timing
of the operation.

For example,

CPU1 does:				CPU2 does:
set_global_enabled_mode("always")	defrag_store("always")

___test_and_set_bit():
// Trying to set bit 1

old = *p
// reads flags, sees defrag bit=0

					set_bit(DEFRAG_DIRECT_FLAG)
					// atomic: sets bit 3
					

*p = old | TRANSPARENT_HUGEPAGE_FLAG;
// writes back old value with bit 1 set
// DEFRAG_DIRECT_FLAG is lost (reverted to 0)

IIUC, this issue didn't exist before. I think it might be safer to use
the atomic test_and_set_bit() to be compatible with the older code.
Though, I'll leave it up to you as I don't have expertise here.

Overall, as you mentioned below, protecting transparent_hugepage_flags
with a spinlock seems like a better, long-term solution to me as well.

> 
> That said, although I don't think this patch makes it worse, there is
> a race here that we can improve.
> 
> My suggestion is to move the remaining helpers (defrag_store()) to use
> sysfs_match_string(), and then create a thp_flags_lock spinlock to
> protect operations on transparent_hugepage_flags. Any concerns about
> this approach?
> 




^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-16 23:26       ` Sohil Mehta
@ 2026-03-17  9:37         ` Lorenzo Stoakes (Oracle)
  2026-03-17 11:23           ` Breno Leitao
  0 siblings, 1 reply; 15+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-17  9:37 UTC (permalink / raw)
  To: Sohil Mehta
  Cc: Breno Leitao, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Mike Rapoport, linux-mm, linux-kernel,
	usamaarif642, kas, kernel-team, Wei Yang, Accardi, Kristen C

On Mon, Mar 16, 2026 at 04:26:30PM -0700, Sohil Mehta wrote:
> On 3/16/2026 3:12 AM, Breno Leitao wrote:
>
> >> I am not a mm expert and typically do not follow the mm list. Is there
> >> an issue with the usage of non-atomic variants here? The commit message
> >> says this uses the same pattern as set_anon_enabled_mode().
> >>
> >> However, set_anon_enabled_mode() has a spinlock=>huge_anon_orders_lock
> >> protecting the access. But, transparent_hugepage_flags seems to be
> >> unprotected in that regard.
> >
> > I don't think that atomic vs non-atomic will help much, given
> > this is a compound operation. Whether it is atomic or not, it
> > is racy with anyone changing these fields (transparent_hugepage_flags).
> > In other words, atomic ops make each individual bit flip safe, but
> > set_global_enabled_mode() and defrag_store() need to flip multiple bits
> > as a group. With atomic ops, two concurrent writers can still interleave
> > and leave the flags in an invalid state.
>
> You are right it is a compound operation. So, there is an existing issue
> with two concurrent writers which can leave the flags in an invalid state.
>
> But, I was wondering if there is a slightly different issue now due to
> the non-atomic part. Some updates could be lost depending on the timing
> of the operation.
>
> For example,
>
> CPU1 does:				CPU2 does:
> set_global_enabled_mode("always")	defrag_store("always")
>
> ___test_and_set_bit():
> // Trying to set bit 1
>
> old = *p
> // reads flags, sees defrag bit=0
>
> 					set_bit(DEFRAG_DIRECT_FLAG)
> 					// atomic: sets bit 3
>
>
> *p = old | TRANSPARENT_HUGEPAGE_FLAG;
> // writes back old value with bit 1 set
> // DEFRAG_DIRECT_FLAG is lost (reverted to 0)
>
> IIUC, this issue didn't exist before. I think it might be safer to use
> the atomic test_and_set_bit() to be compatible with the older code.
> Though, I'll leave it up to you as I don't have expertise here.

No, it's up to the maintainers :)

Given the above I think we should switch back to the atomic accessors.

We can address the broader issues with this horrible code in a separate
series.

>
> Overall, as you mentioned below, protecting transparent_hugepage_flags
> with a spinlock seems like a better, long-term solution to me as well.

Yeah, let's look at doing a follow up that cleans this up in general and
address that then.

>
> >
> > That said, although I don't think this patch is making it worse, I think
> > there is a racy issue here that we can improve.
> >
> > My suggestion is to move the rest of the helpers (defrag_store()) to use
> > sysfs_match_string(), and then create a thp_flags_lock spinlock to
> > protect operations against transparent_hugepage_flags. Any concern about
> > this approach?
> >
>
>

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-17  9:37         ` Lorenzo Stoakes (Oracle)
@ 2026-03-17 11:23           ` Breno Leitao
  2026-03-17 11:25             ` Lorenzo Stoakes (Oracle)
  0 siblings, 1 reply; 15+ messages in thread
From: Breno Leitao @ 2026-03-17 11:23 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: Sohil Mehta, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Mike Rapoport, linux-mm, linux-kernel,
	usamaarif642, kas, kernel-team, Wei Yang, Accardi, Kristen C

On Tue, Mar 17, 2026 at 09:37:39AM +0000, Lorenzo Stoakes (Oracle) wrote:
> On Mon, Mar 16, 2026 at 04:26:30PM -0700, Sohil Mehta wrote:
> > On 3/16/2026 3:12 AM, Breno Leitao wrote:
> >
> > >> I am not a mm expert and typically do not follow the mm list. Is there
> > >> an issue with the usage of non-atomic variants here? The commit message
> > >> says this uses the same pattern as set_anon_enabled_mode().
> > >>
> > >> However, set_anon_enabled_mode() has a spinlock=>huge_anon_orders_lock
> > >> protecting the access. But, transparent_hugepage_flags seems to be
> > >> unprotected in that regard.
> > >
> > > I don't think that atomic vs non-atomic will help much, given
> > > this is a compound operation. Whether it is atomic or not, it
> > > is racy with anyone changing these fields (transparent_hugepage_flags).
> > > In other words, atomic ops make each individual bit flip safe, but
> > > set_global_enabled_mode() and defrag_store() need to flip multiple bits
> > > as a group. With atomic ops, two concurrent writers can still interleave
> > > and leave the flags in an invalid state.
> >
> > You are right it is a compound operation. So, there is an existing issue
> > with two concurrent writers which can leave the flags in an invalid state.
> >
> > But, I was wondering if there is a slightly different issue now due to
> > the non-atomic part. Some updates could be lost depending on the timing
> > of the operation.
> >
> > For example,
> >
> > CPU1 does:				CPU2 does:
> > set_global_enabled_mode("always")	defrag_store("always")
> >
> > ___test_and_set_bit():
> > // Trying to set bit 1
> >
> > old = *p
> > // reads flags, sees defrag bit=0
> >
> > 					set_bit(DEFRAG_DIRECT_FLAG)
> > 					// atomic: sets bit 3
> >
> >
> > *p = old | TRANSPARENT_HUGEPAGE_FLAG;
> > // writes back old value with bit 1 set
> > // DEFRAG_DIRECT_FLAG is lost (reverted to 0)
> >
> > IIUC, this issue didn't exist before. I think it might be safer to use
> > the atomic test_and_set_bit() to be compatible with the older code.
> > Though, I'll leave it up to you as I don't have expertise here.
> 
> No, it's up to the maintainers :)
> 
> Given the above I think we should switch back to the atomic accessors.
> 
> We can address the broader issues with this horrible code in a separate
> series.

Ack. let me respin then.

> > Overall, as you mentioned below, protecting transparent_hugepage_flags
> > with a spinlock seems like a better, long-term solution to me as well.
> 
> Yeah, let's look at doing a follow up that cleans this up in general and
> address that then.

Sure. I am planning to improve defrag_store() as the next work, and then
come up with this additional spinlock for transparent_hugepage_flags.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-17 11:23           ` Breno Leitao
@ 2026-03-17 11:25             ` Lorenzo Stoakes (Oracle)
  2026-03-17 11:48               ` Breno Leitao
  0 siblings, 1 reply; 15+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-17 11:25 UTC (permalink / raw)
  To: Breno Leitao
  Cc: Sohil Mehta, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Mike Rapoport, linux-mm, linux-kernel,
	usamaarif642, kas, kernel-team, Wei Yang, Accardi, Kristen C

On Tue, Mar 17, 2026 at 04:23:37AM -0700, Breno Leitao wrote:
> On Tue, Mar 17, 2026 at 09:37:39AM +0000, Lorenzo Stoakes (Oracle) wrote:
> >
> > Given the above I think we should switch back to the atomic accessors.
> >
> > We can address the broader issues with this horrible code in a separate
> > series.
>
> Ack. let me respin then.

Thanks!

>
> > > Overall, as you mentioned below, protecting transparent_hugepage_flags
> > > with a spinlock seems like a better, long-term solution to me as well.
> >
> > Yeah, let's look at doing a follow up that cleans this up in general and
> > address that then.
>
> Sure. I am planning to improve defrag_store() as the next work, and then
> come up with this additional spinlock for transparent_hugepage_flags.
>

To be clear, by the way: by 'horrible code' I meant the existing logic with
the globals etc., not your change, which is positive and welcome :)

Cheers, Lorenzo


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled()
  2026-03-17 11:25             ` Lorenzo Stoakes (Oracle)
@ 2026-03-17 11:48               ` Breno Leitao
  0 siblings, 0 replies; 15+ messages in thread
From: Breno Leitao @ 2026-03-17 11:48 UTC (permalink / raw)
  To: Lorenzo Stoakes (Oracle)
  Cc: Sohil Mehta, Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Mike Rapoport, linux-mm, linux-kernel,
	usamaarif642, kas, kernel-team, Wei Yang, Accardi, Kristen C

On Tue, Mar 17, 2026 at 11:25:23AM +0000, Lorenzo Stoakes (Oracle) wrote:
> On Tue, Mar 17, 2026 at 04:23:37AM -0700, Breno Leitao wrote:
> > On Tue, Mar 17, 2026 at 09:37:39AM +0000, Lorenzo Stoakes (Oracle) wrote:
> > >
> > > Given the above I think we should switch back to the atomic accessors.
> > >
> > > We can address the broader issues with this horrible code in a separate
> > > series.
> >
> > Ack. let me respin then.
> 
> Thanks!
> 
> >
> > > > Overall, as you mentioned below, protecting transparent_hugepage_flags
> > > > with a spinlock seems like a better, long-term solution to me as well.
> > >
> > > Yeah, let's look at doing a follow up that cleans this up in general and
> > > address that then.
> >
> > Sure. I am planning to improve defrag_store() as the next work, and then
> > come up with this additional spinlock for transparent_hugepage_flags.
> >
> 
> To be clear, by the way: by 'horrible code' I meant the existing logic with
> the globals etc., not your change, which is positive and welcome :)

lol. Not once did it cross my mind that you might be referring to my
changes. I have never written horrible code in my life. :-P


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
  2026-03-11 10:17 ` [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders() Breno Leitao
  2026-03-11 11:44   ` Lance Yang
@ 2026-03-23  9:00   ` David Hildenbrand (Arm)
  1 sibling, 0 replies; 15+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23  9:00 UTC (permalink / raw)
  To: Breno Leitao, Andrew Morton, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Mike Rapoport
  Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team,
	Lorenzo Stoakes (Oracle)

On 3/11/26 11:17, Breno Leitao wrote:
> Consolidate the repeated spin_lock/set_bit/clear_bit pattern in
> anon_enabled_store() into a new change_anon_orders() helper that
> loops over an orders[] array, setting the bit for the selected mode
> and clearing the others.
> 
> Introduce enum anon_enabled_mode and anon_enabled_mode_strings[]
> for the per-order anon THP setting.
> 
> Use sysfs_match_string() with the anon_enabled_mode_strings[] table
> to replace the if/else chain of sysfs_streq() calls.
> 
> The helper uses test_and_set_bit()/test_and_clear_bit() to track
> whether the state actually changed, so start_stop_khugepaged() is
> only called when needed. When the mode is unchanged,
> set_recommended_min_free_kbytes() is called directly to preserve
> the watermark recalculation behavior of the original code.
> 
> Signed-off-by: Breno Leitao <leitao@debian.org>
> Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
> ---

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2026-03-23  9:00 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-11 10:17 [PATCH v6 0/4] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
2026-03-11 10:17 ` [PATCH v6 1/4] mm: khugepaged: export set_recommended_min_free_kbytes() Breno Leitao
2026-03-11 10:17 ` [PATCH v6 2/4] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders() Breno Leitao
2026-03-11 11:44   ` Lance Yang
2026-03-23  9:00   ` David Hildenbrand (Arm)
2026-03-11 10:17 ` [PATCH v6 3/4] mm: huge_memory: refactor enabled_store() with change_enabled() Breno Leitao
2026-03-11 12:20   ` Lance Yang
2026-03-13 22:31   ` Sohil Mehta
2026-03-16 10:12     ` Breno Leitao
2026-03-16 23:26       ` Sohil Mehta
2026-03-17  9:37         ` Lorenzo Stoakes (Oracle)
2026-03-17 11:23           ` Breno Leitao
2026-03-17 11:25             ` Lorenzo Stoakes (Oracle)
2026-03-17 11:48               ` Breno Leitao
2026-03-11 10:17 ` [PATCH v6 4/4] mm: ratelimit min_free_kbytes adjustment messages Breno Leitao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox