* [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
@ 2026-05-12 12:05 ranxiaokai627
2026-05-12 12:05 ` [PATCH 2/2] mm: huge_memory: refactor thpsize_shmem_enabled_show() with helper arrays ranxiaokai627
2026-05-13 3:04 ` [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() Baolin Wang
0 siblings, 2 replies; 9+ messages in thread
From: ranxiaokai627 @ 2026-05-12 12:05 UTC (permalink / raw)
To: hughd, baolin.wang, akpm
Cc: linux-mm, linux-kernel, ran.xiaokai, ranxiaokai627
From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
anon_enabled_store() with set_anon_enabled_mode()"), refactor
thpsize_shmem_enabled_store() using sysfs_match_string().
This eliminates the duplicated spin_lock()/spin_unlock() and
set_bit()/clear_bit() calls across all branches.
Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
all test cases passed.
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
1 file changed, 43 insertions(+), 45 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 3b5dc21b323c..0cc7872cc576 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
static DEFINE_SPINLOCK(huge_shmem_orders_lock);
+enum huge_shmem_enabled_mode {
+ HUGE_SHMEM_ENABLED_ALWAYS = 0,
+ HUGE_SHMEM_ENABLED_INHERIT,
+ HUGE_SHMEM_ENABLED_WITHIN_SIZE,
+ HUGE_SHMEM_ENABLED_ADVISE,
+ HUGE_SHMEM_ENABLED_NEVER,
+};
+
+static const char * const huge_shmem_enabled_mode_strings[] = {
+ [HUGE_SHMEM_ENABLED_ALWAYS] = "always",
+ [HUGE_SHMEM_ENABLED_INHERIT] = "inherit",
+ [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
+ [HUGE_SHMEM_ENABLED_ADVISE] = "advise",
+ [HUGE_SHMEM_ENABLED_NEVER] = "never",
+};
+
+static unsigned long * const huge_shmem_orders_by_mode[] = {
+ [HUGE_SHMEM_ENABLED_ALWAYS] = &huge_shmem_orders_always,
+ [HUGE_SHMEM_ENABLED_INHERIT] = &huge_shmem_orders_inherit,
+ [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
+ [HUGE_SHMEM_ENABLED_ADVISE] = &huge_shmem_orders_madvise,
+};
+
static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -5551,57 +5574,32 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
const char *buf, size_t count)
{
int order = to_thpsize(kobj)->order;
+ int mode, m;
ssize_t ret = count;
+ int err;
- if (sysfs_streq(buf, "always")) {
- spin_lock(&huge_shmem_orders_lock);
- clear_bit(order, &huge_shmem_orders_inherit);
- clear_bit(order, &huge_shmem_orders_madvise);
- clear_bit(order, &huge_shmem_orders_within_size);
- set_bit(order, &huge_shmem_orders_always);
- spin_unlock(&huge_shmem_orders_lock);
- } else if (sysfs_streq(buf, "inherit")) {
- /* Do not override huge allocation policy with non-PMD sized mTHP */
- if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
- return -EINVAL;
+ mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
+ if (mode < 0)
+ return -EINVAL;
- spin_lock(&huge_shmem_orders_lock);
- clear_bit(order, &huge_shmem_orders_always);
- clear_bit(order, &huge_shmem_orders_madvise);
- clear_bit(order, &huge_shmem_orders_within_size);
- set_bit(order, &huge_shmem_orders_inherit);
- spin_unlock(&huge_shmem_orders_lock);
- } else if (sysfs_streq(buf, "within_size")) {
- spin_lock(&huge_shmem_orders_lock);
- clear_bit(order, &huge_shmem_orders_always);
- clear_bit(order, &huge_shmem_orders_inherit);
- clear_bit(order, &huge_shmem_orders_madvise);
- set_bit(order, &huge_shmem_orders_within_size);
- spin_unlock(&huge_shmem_orders_lock);
- } else if (sysfs_streq(buf, "advise")) {
- spin_lock(&huge_shmem_orders_lock);
- clear_bit(order, &huge_shmem_orders_always);
- clear_bit(order, &huge_shmem_orders_inherit);
- clear_bit(order, &huge_shmem_orders_within_size);
- set_bit(order, &huge_shmem_orders_madvise);
- spin_unlock(&huge_shmem_orders_lock);
- } else if (sysfs_streq(buf, "never")) {
- spin_lock(&huge_shmem_orders_lock);
- clear_bit(order, &huge_shmem_orders_always);
- clear_bit(order, &huge_shmem_orders_inherit);
- clear_bit(order, &huge_shmem_orders_within_size);
- clear_bit(order, &huge_shmem_orders_madvise);
- spin_unlock(&huge_shmem_orders_lock);
- } else {
- ret = -EINVAL;
+ /* Do not override huge allocation policy with non-PMD sized mTHP */
+ if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
+ shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
+ return -EINVAL;
+
+ spin_lock(&huge_shmem_orders_lock);
+ for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
+ if (m == mode)
+ set_bit(order, huge_shmem_orders_by_mode[m]);
+ else
+ clear_bit(order, huge_shmem_orders_by_mode[m]);
}
+ spin_unlock(&huge_shmem_orders_lock);
- if (ret > 0) {
- int err = start_stop_khugepaged();
+ err = start_stop_khugepaged();
+ if (err)
+ ret = err;
- if (err)
- ret = err;
- }
return ret;
}
--
2.25.1
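A small userspace model may help illustrate the matching contract the patch relies on: sysfs_match_string() returns the index of the matching array entry (tolerating the trailing newline that sysfs store buffers usually carry) or a negative errno. The sketch below reimplements that contract in plain C with hypothetical names; it is not the kernel implementation:

```c
#include <assert.h>
#include <string.h>

/*
 * Userspace sketch of the sysfs_match_string() contract: return the
 * index of the matching entry, accepting an optional trailing newline,
 * or a negative value when nothing matches. Names are illustrative.
 */
static int match_string_sketch(const char * const *arr, size_t n,
			       const char *buf)
{
	for (size_t i = 0; i < n; i++) {
		size_t len = strlen(arr[i]);

		if (strncmp(buf, arr[i], len) == 0 &&
		    (buf[len] == '\0' ||
		     (buf[len] == '\n' && buf[len + 1] == '\0')))
			return (int)i;
	}
	return -22; /* stand-in for -EINVAL */
}
```

With the mode-string array from the patch, writing "inherit\n" maps to index 1 (HUGE_SHMEM_ENABLED_INHERIT) and any unknown string yields a negative value, which the store path turns into -EINVAL.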
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH 2/2] mm: huge_memory: refactor thpsize_shmem_enabled_show() with helper arrays
2026-05-12 12:05 [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() ranxiaokai627
@ 2026-05-12 12:05 ` ranxiaokai627
2026-05-13 3:04 ` [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() Baolin Wang
1 sibling, 0 replies; 9+ messages in thread
From: ranxiaokai627 @ 2026-05-12 12:05 UTC (permalink / raw)
To: hughd, baolin.wang, akpm
Cc: linux-mm, linux-kernel, ran.xiaokai, ranxiaokai627
From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Replace the hardcoded if/else chain of test_bit() calls and string
literals in thpsize_shmem_enabled_show() with a loop over
huge_shmem_orders_by_mode[] and huge_shmem_enabled_mode_strings[] arrays.
This makes thpsize_shmem_enabled_show() consistent with
thpsize_shmem_enabled_store() and eliminates duplicated mode name strings.
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
mm/shmem.c | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0cc7872cc576..93fb0a5dca5c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -5553,20 +5553,30 @@ static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
int order = to_thpsize(kobj)->order;
- const char *output;
-
- if (test_bit(order, &huge_shmem_orders_always))
- output = "[always] inherit within_size advise never";
- else if (test_bit(order, &huge_shmem_orders_inherit))
- output = "always [inherit] within_size advise never";
- else if (test_bit(order, &huge_shmem_orders_within_size))
- output = "always inherit [within_size] advise never";
- else if (test_bit(order, &huge_shmem_orders_madvise))
- output = "always inherit within_size [advise] never";
- else
- output = "always inherit within_size advise [never]";
+ int active = HUGE_SHMEM_ENABLED_NEVER;
+ int len = 0;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(huge_shmem_orders_by_mode); i++) {
+ if (test_bit(order, huge_shmem_orders_by_mode[i])) {
+ active = i;
+ break;
+ }
+ }
+
+ for (i = 0; i < ARRAY_SIZE(huge_shmem_enabled_mode_strings); i++) {
+ if (i == active)
+ len += sysfs_emit_at(buf, len, "[%s] ",
+ huge_shmem_enabled_mode_strings[i]);
+ else
+ len += sysfs_emit_at(buf, len, "%s ",
+ huge_shmem_enabled_mode_strings[i]);
+ }
+
+ /* Replace trailing space with newline */
+ buf[len - 1] = '\n';
- return sysfs_emit(buf, "%s\n", output);
+ return len;
}
static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
--
2.25.1
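The emit loop introduced above can be exercised in a userspace model — here snprintf() stands in for sysfs_emit_at(), only the mode strings are taken from the patch, and everything else is an illustrative stand-in:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static const char * const mode_strings[] = {
	"always", "inherit", "within_size", "advise", "never",
};
#define NMODES (sizeof(mode_strings) / sizeof(mode_strings[0]))

/*
 * Mimic the reworked thpsize_shmem_enabled_show() formatting: every
 * mode name is emitted, the active one in brackets, and the trailing
 * space is replaced with a newline.
 */
static int emit_modes(char *buf, size_t size, int active)
{
	int len = 0;

	for (size_t i = 0; i < NMODES; i++)
		len += snprintf(buf + len, size - len,
				(int)i == active ? "[%s] " : "%s ",
				mode_strings[i]);
	buf[len - 1] = '\n'; /* NMODES > 0, so len >= 1 here */
	return len;
}
```

One thing the model makes easy to see: the buf[len - 1] overwrite is only safe because at least one string is always emitted — the same implicit assumption the patch makes.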
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-12 12:05 [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() ranxiaokai627
2026-05-12 12:05 ` [PATCH 2/2] mm: huge_memory: refactor thpsize_shmem_enabled_show() with helper arrays ranxiaokai627
@ 2026-05-13 3:04 ` Baolin Wang
2026-05-13 8:25 ` ranxiaokai627
2026-05-13 9:03 ` Lorenzo Stoakes
1 sibling, 2 replies; 9+ messages in thread
From: Baolin Wang @ 2026-05-13 3:04 UTC (permalink / raw)
To: ranxiaokai627, hughd, akpm
Cc: linux-mm, linux-kernel, ran.xiaokai,
Breno Leitao <leitao@debian.org>, Lorenzo Stoakes (Oracle)
CC Breno and Lorenzo
On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>
> Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> anon_enabled_store() with set_anon_enabled_mode()"), refactor
> thpsize_shmem_enabled_store() using sysfs_match_string().
> This eliminates the duplicated spin_lock()/spin_unlock() and
> set_bit()/clear_bit() calls across all branches.
>
> Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> all test cases passed.
>
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
Thanks for doing this.
> mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
> 1 file changed, 43 insertions(+), 45 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 3b5dc21b323c..0cc7872cc576 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
> struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
> static DEFINE_SPINLOCK(huge_shmem_orders_lock);
>
> +enum huge_shmem_enabled_mode {
> + HUGE_SHMEM_ENABLED_ALWAYS = 0,
> + HUGE_SHMEM_ENABLED_INHERIT,
> + HUGE_SHMEM_ENABLED_WITHIN_SIZE,
> + HUGE_SHMEM_ENABLED_ADVISE,
> + HUGE_SHMEM_ENABLED_NEVER,
> +};
> +
> +static const char * const huge_shmem_enabled_mode_strings[] = {
> + [HUGE_SHMEM_ENABLED_ALWAYS] = "always",
> + [HUGE_SHMEM_ENABLED_INHERIT] = "inherit",
> + [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
> + [HUGE_SHMEM_ENABLED_ADVISE] = "advise",
> + [HUGE_SHMEM_ENABLED_NEVER] = "never",
> +};
> +
> +static unsigned long * const huge_shmem_orders_by_mode[] = {
> + [HUGE_SHMEM_ENABLED_ALWAYS] = &huge_shmem_orders_always,
> + [HUGE_SHMEM_ENABLED_INHERIT] = &huge_shmem_orders_inherit,
> + [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
> + [HUGE_SHMEM_ENABLED_ADVISE] = &huge_shmem_orders_madvise,
> +};
> +
> static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
> struct kobj_attribute *attr, char *buf)
> {
> @@ -5551,57 +5574,32 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
> const char *buf, size_t count)
> {
> int order = to_thpsize(kobj)->order;
> + int mode, m;
> ssize_t ret = count;
> + int err;
>
> - if (sysfs_streq(buf, "always")) {
> - spin_lock(&huge_shmem_orders_lock);
> - clear_bit(order, &huge_shmem_orders_inherit);
> - clear_bit(order, &huge_shmem_orders_madvise);
> - clear_bit(order, &huge_shmem_orders_within_size);
> - set_bit(order, &huge_shmem_orders_always);
> - spin_unlock(&huge_shmem_orders_lock);
> - } else if (sysfs_streq(buf, "inherit")) {
> - /* Do not override huge allocation policy with non-PMD sized mTHP */
> - if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> - return -EINVAL;
> + mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
> + if (mode < 0)
> + return -EINVAL;
>
> - spin_lock(&huge_shmem_orders_lock);
> - clear_bit(order, &huge_shmem_orders_always);
> - clear_bit(order, &huge_shmem_orders_madvise);
> - clear_bit(order, &huge_shmem_orders_within_size);
> - set_bit(order, &huge_shmem_orders_inherit);
> - spin_unlock(&huge_shmem_orders_lock);
> - } else if (sysfs_streq(buf, "within_size")) {
> - spin_lock(&huge_shmem_orders_lock);
> - clear_bit(order, &huge_shmem_orders_always);
> - clear_bit(order, &huge_shmem_orders_inherit);
> - clear_bit(order, &huge_shmem_orders_madvise);
> - set_bit(order, &huge_shmem_orders_within_size);
> - spin_unlock(&huge_shmem_orders_lock);
> - } else if (sysfs_streq(buf, "advise")) {
> - spin_lock(&huge_shmem_orders_lock);
> - clear_bit(order, &huge_shmem_orders_always);
> - clear_bit(order, &huge_shmem_orders_inherit);
> - clear_bit(order, &huge_shmem_orders_within_size);
> - set_bit(order, &huge_shmem_orders_madvise);
> - spin_unlock(&huge_shmem_orders_lock);
> - } else if (sysfs_streq(buf, "never")) {
> - spin_lock(&huge_shmem_orders_lock);
> - clear_bit(order, &huge_shmem_orders_always);
> - clear_bit(order, &huge_shmem_orders_inherit);
> - clear_bit(order, &huge_shmem_orders_within_size);
> - clear_bit(order, &huge_shmem_orders_madvise);
> - spin_unlock(&huge_shmem_orders_lock);
> - } else {
> - ret = -EINVAL;
> + /* Do not override huge allocation policy with non-PMD sized mTHP */
> + if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
> + shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> + return -EINVAL;
> +
> + spin_lock(&huge_shmem_orders_lock);
> + for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
> + if (m == mode)
> + set_bit(order, huge_shmem_orders_by_mode[m]);
> + else
> + clear_bit(order, huge_shmem_orders_by_mode[m]);
> }
We are already under the lock, so you can use non-atomic functions like
commit 82d9ff648c6c does: __test_and_set_bit/__test_and_clear_bit.
> + spin_unlock(&huge_shmem_orders_lock);
>
> - if (ret > 0) {
> - int err = start_stop_khugepaged();
> + err = start_stop_khugepaged();
> + if (err)
Moreover, I think we can follow commit 82d9ff648c6c's approach: if
nothing changed, we don't need to call start_stop_khugepaged().
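Both suggestions can be sketched together in a userspace model (plain C, not kernel code): the test-and-set/test-and-clear helpers mimic the non-atomic __test_and_set_bit()/__test_and_clear_bit(), and the returned "changed" flag is what would gate the start_stop_khugepaged() call. All names below are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel's non-atomic RMW bitops. */
static bool test_and_set(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	bool old = (*addr & mask) != 0;

	*addr |= mask;
	return old;
}

static bool test_and_clear(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	bool old = (*addr & mask) != 0;

	*addr &= ~mask;
	return old;
}

/* One bitmap per mode; "never" has no bitmap, exactly as in the patch. */
static unsigned long orders_always, orders_inherit, orders_within, orders_madvise;
static unsigned long * const orders_by_mode[] = {
	&orders_always, &orders_inherit, &orders_within, &orders_madvise,
};
#define NBITMAPS (sizeof(orders_by_mode) / sizeof(orders_by_mode[0]))

/* Returns true if any bitmap changed, i.e. khugepaged may need a kick. */
static bool set_shmem_mode(int order, int mode)
{
	bool changed = false;

	for (size_t m = 0; m < NBITMAPS; m++) {
		if ((int)m == mode)
			changed |= !test_and_set(order, orders_by_mode[m]);
		else
			changed |= test_and_clear(order, orders_by_mode[m]);
	}
	return changed;
}
```

A nice property of the loop form is that "never" (an index past the bitmap array) falls out naturally: every bitmap is cleared, and "changed" still reports whether anything flipped, so rewriting the current mode could skip the start_stop_khugepaged() call.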
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 3:04 ` [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() Baolin Wang
@ 2026-05-13 8:25 ` ranxiaokai627
2026-05-13 8:56 ` Breno Leitao
2026-05-13 9:03 ` Lorenzo Stoakes
1 sibling, 1 reply; 9+ messages in thread
From: ranxiaokai627 @ 2026-05-13 8:25 UTC (permalink / raw)
To: baolin.wang
Cc: akpm, hughd, leitao, linux-kernel, linux-mm, ljs, ran.xiaokai,
ranxiaokai627
>CC Breno and Lorenzo
>
>On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
>> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>>
>> Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
>> anon_enabled_store() with set_anon_enabled_mode()"), refactor
>> thpsize_shmem_enabled_store() using sysfs_match_string().
>> This eliminates the duplicated spin_lock()/spin_unlock() and
>> set_bit()/clear_bit() calls across all branches.
>>
>> Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
>> all test cases passed.
>>
>> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>> ---
>
>Thanks for doing this.
>
>> mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
>> 1 file changed, 43 insertions(+), 45 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 3b5dc21b323c..0cc7872cc576 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
>> struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
>> static DEFINE_SPINLOCK(huge_shmem_orders_lock);
>>
>
>We are already under the lock, so you can use non-atomic functions like
>commit 82d9ff648c6c does: __test_and_set_bit/__test_and_clear_bit.
>
>> + spin_unlock(&huge_shmem_orders_lock);
>>
>> - if (ret > 0) {
>> - int err = start_stop_khugepaged();
>> + err = start_stop_khugepaged();
>> + if (err)
>
>Moreover, I think we can follow commit 82d9ff648c6c's approach: if
>nothing changed, we don't need to call start_stop_khugepaged().
Thanks for the review.
Yes, indeed, we can further optimize this by following the approach
in commit 82d9ff648c6c.
It seems odd to still call set_recommended_min_free_kbytes() when the
mode hasn't changed. Maybe this is to handle the case where the user
modified /proc/sys/vm/min_free_kbytes in this time window?
I'll follow the existing logic and send a v2.
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 8:25 ` ranxiaokai627
@ 2026-05-13 8:56 ` Breno Leitao
0 siblings, 0 replies; 9+ messages in thread
From: Breno Leitao @ 2026-05-13 8:56 UTC (permalink / raw)
To: ranxiaokai627
Cc: baolin.wang, akpm, hughd, linux-kernel, linux-mm, ljs,
ran.xiaokai
On Wed, May 13, 2026 at 08:25:58AM +0000, ranxiaokai627@163.com wrote:
> >On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
> >> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> >>
> >> Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> >> anon_enabled_store() with set_anon_enabled_mode()"), refactor
> >> thpsize_shmem_enabled_store() using sysfs_match_string().
> >> This eliminates the duplicated spin_lock()/spin_unlock() and
> >> set_bit()/clear_bit() calls across all branches.
> >>
> >> Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> >> all test cases passed.
> >>
> >> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> >> ---
> >
> >Thanks for doing this.
Ack. I appreciate you taking the initiative to convert the remaining
instances.
> >Moreover, I think we can follow commit 82d9ff648c6c's approach: if
> >nothing changed, we don't need to call start_stop_khugepaged().
>
> Thanks for the review.
> Yes, indeed, We can further optimize this by following the approach
> in commit 82d9ff648c6c.
Agreed, that would be a good improvement.
> It seems odd to still call set_recommended_min_free_kbytes() when the
> mode hasn't changed. Maybe this is to handle the case where the user
> modified /proc/sys/vm/min_free_kbytes in this timewindow?
You're right that it appears unusual. The rationale is to avoid breaking
userspace for applications that may be 'triggering' this behavior by
rewriting the same value to sysfs. While this isn't an expected use
case, we need to maintain backward compatibility.
The discussion around this took place here, if you're interested in the
context:
https://lore.kernel.org/all/ec07d7f0-cad4-4d9b-8e40-d4ded8170340@lucifer.local/
Please CC me on the next revision.
Thanks,
--breno
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 3:04 ` [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() Baolin Wang
2026-05-13 8:25 ` ranxiaokai627
@ 2026-05-13 9:03 ` Lorenzo Stoakes
2026-05-13 9:38 ` David Hildenbrand (Arm)
2026-05-13 10:09 ` ranxiaokai627
1 sibling, 2 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-05-13 9:03 UTC (permalink / raw)
To: Baolin Wang
Cc: ranxiaokai627, hughd, akpm, linux-mm, linux-kernel, ran.xiaokai,
Breno Leitao <leitao@debian.org>, David Hildenbrand
+cc David also!
Unfortunately we put some hugetlb logic in mm/shmem.c too which makes
get_maintainers.pl not do the right thing.
I wonder if we should add mm/shmem.c to the THP section _also_ just to cover
this off?
On Wed, May 13, 2026 at 11:04:12AM +0800, Baolin Wang wrote:
> CC Breno and Lorenzo
Ran - since you're going to need to do a respin anyway, please cc Breno, David,
myself, and probably worth adding the THP people in general:
R: Zi Yan <ziy@nvidia.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com> (Baolin already cc'd of course :)
R: Liam R. Howlett <liam@infradead.org>
R: Nico Pache <npache@redhat.com>
R: Ryan Roberts <ryan.roberts@arm.com>
R: Dev Jain <dev.jain@arm.com>
R: Barry Song <baohua@kernel.org>
R: Lance Yang <lance.yang@linux.dev>
Thanks!
>
> On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
> > From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> >
> > Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> > anon_enabled_store() with set_anon_enabled_mode()"), refactor
> > thpsize_shmem_enabled_store() using sysfs_match_string().
> > This eliminates the duplicated spin_lock()/spin_unlock() and
> > set_bit()/clear_bit() calls across all branches.
> >
> > Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> > all test cases passed.
> >
> > Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> > ---
>
> Thanks for doing this.
>
> > mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
> > 1 file changed, 43 insertions(+), 45 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 3b5dc21b323c..0cc7872cc576 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
> > struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
> > static DEFINE_SPINLOCK(huge_shmem_orders_lock);
> > +enum huge_shmem_enabled_mode {
> > + HUGE_SHMEM_ENABLED_ALWAYS = 0,
> > + HUGE_SHMEM_ENABLED_INHERIT,
> > + HUGE_SHMEM_ENABLED_WITHIN_SIZE,
> > + HUGE_SHMEM_ENABLED_ADVISE,
> > + HUGE_SHMEM_ENABLED_NEVER,
> > +};
> > +
> > +static const char * const huge_shmem_enabled_mode_strings[] = {
> > + [HUGE_SHMEM_ENABLED_ALWAYS] = "always",
> > + [HUGE_SHMEM_ENABLED_INHERIT] = "inherit",
> > + [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
> > + [HUGE_SHMEM_ENABLED_ADVISE] = "advise",
> > + [HUGE_SHMEM_ENABLED_NEVER] = "never",
> > +};
> > +
> > +static unsigned long * const huge_shmem_orders_by_mode[] = {
> > + [HUGE_SHMEM_ENABLED_ALWAYS] = &huge_shmem_orders_always,
> > + [HUGE_SHMEM_ENABLED_INHERIT] = &huge_shmem_orders_inherit,
> > + [HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
> > + [HUGE_SHMEM_ENABLED_ADVISE] = &huge_shmem_orders_madvise,
> > +};
> > +
> > static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
> > struct kobj_attribute *attr, char *buf)
> > {
> > @@ -5551,57 +5574,32 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
> > const char *buf, size_t count)
> > {
> > int order = to_thpsize(kobj)->order;
> > + int mode, m;
> > ssize_t ret = count;
> > + int err;
> > - if (sysfs_streq(buf, "always")) {
> > - spin_lock(&huge_shmem_orders_lock);
> > - clear_bit(order, &huge_shmem_orders_inherit);
> > - clear_bit(order, &huge_shmem_orders_madvise);
> > - clear_bit(order, &huge_shmem_orders_within_size);
> > - set_bit(order, &huge_shmem_orders_always);
> > - spin_unlock(&huge_shmem_orders_lock);
> > - } else if (sysfs_streq(buf, "inherit")) {
> > - /* Do not override huge allocation policy with non-PMD sized mTHP */
> > - if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > - return -EINVAL;
> > + mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
> > + if (mode < 0)
> > + return -EINVAL;
> > - spin_lock(&huge_shmem_orders_lock);
> > - clear_bit(order, &huge_shmem_orders_always);
> > - clear_bit(order, &huge_shmem_orders_madvise);
> > - clear_bit(order, &huge_shmem_orders_within_size);
> > - set_bit(order, &huge_shmem_orders_inherit);
> > - spin_unlock(&huge_shmem_orders_lock);
> > - } else if (sysfs_streq(buf, "within_size")) {
> > - spin_lock(&huge_shmem_orders_lock);
> > - clear_bit(order, &huge_shmem_orders_always);
> > - clear_bit(order, &huge_shmem_orders_inherit);
> > - clear_bit(order, &huge_shmem_orders_madvise);
> > - set_bit(order, &huge_shmem_orders_within_size);
> > - spin_unlock(&huge_shmem_orders_lock);
> > - } else if (sysfs_streq(buf, "advise")) {
> > - spin_lock(&huge_shmem_orders_lock);
> > - clear_bit(order, &huge_shmem_orders_always);
> > - clear_bit(order, &huge_shmem_orders_inherit);
> > - clear_bit(order, &huge_shmem_orders_within_size);
> > - set_bit(order, &huge_shmem_orders_madvise);
> > - spin_unlock(&huge_shmem_orders_lock);
> > - } else if (sysfs_streq(buf, "never")) {
> > - spin_lock(&huge_shmem_orders_lock);
> > - clear_bit(order, &huge_shmem_orders_always);
> > - clear_bit(order, &huge_shmem_orders_inherit);
> > - clear_bit(order, &huge_shmem_orders_within_size);
> > - clear_bit(order, &huge_shmem_orders_madvise);
> > - spin_unlock(&huge_shmem_orders_lock);
> > - } else {
> > - ret = -EINVAL;
> > + /* Do not override huge allocation policy with non-PMD sized mTHP */
> > + if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
> > + shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > + return -EINVAL;
> > +
> > + spin_lock(&huge_shmem_orders_lock);
> > + for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
> > + if (m == mode)
> > + set_bit(order, huge_shmem_orders_by_mode[m]);
> > + else
> > + clear_bit(order, huge_shmem_orders_by_mode[m]);
> > }
>
> We are already under the lock, so you can use non-atomic functions like
> commit 82d9ff648c6c does: __test_and_set_bit/__test_and_clear_bit.
Agreed. I guess you're going with the RMW form so we can conditionally figure
out if we need to start_stop_khugepaged()?
>
> > + spin_unlock(&huge_shmem_orders_lock);
> > - if (ret > 0) {
> > - int err = start_stop_khugepaged();
> > + err = start_stop_khugepaged();
> > + if (err)
>
> Moreover, I think we can follow commit 82d9ff648c6c's approach: if nothing
> changed, we don't need to call start_stop_khugepaged().
Agreed.
Cheers, Lorenzo
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 9:03 ` Lorenzo Stoakes
@ 2026-05-13 9:38 ` David Hildenbrand (Arm)
2026-05-13 9:40 ` Lorenzo Stoakes
2026-05-13 10:09 ` ranxiaokai627
1 sibling, 1 reply; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-13 9:38 UTC (permalink / raw)
To: Lorenzo Stoakes, Baolin Wang
Cc: ranxiaokai627, hughd, akpm, linux-mm, linux-kernel, ran.xiaokai,
Breno Leitao <leitao@debian.org>
On 5/13/26 11:03, Lorenzo Stoakes wrote:
> +cc David also!
>
> Unfortunately we put some hugetlb logic in mm/shmem.c too which makes
> get_maintainers.pl not do the right thing.
>
> I wonder if we should add mm/shmem.c to the THP section _also_ just to cover
> this off?
I'd rather not. Too much noise :)
--
Cheers,
David
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 9:38 ` David Hildenbrand (Arm)
@ 2026-05-13 9:40 ` Lorenzo Stoakes
0 siblings, 0 replies; 9+ messages in thread
From: Lorenzo Stoakes @ 2026-05-13 9:40 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Baolin Wang, ranxiaokai627, hughd, akpm, linux-mm, linux-kernel,
ran.xiaokai, Breno Leitao <leitao@debian.org>
On Wed, May 13, 2026 at 11:38:21AM +0200, David Hildenbrand (Arm) wrote:
> On 5/13/26 11:03, Lorenzo Stoakes wrote:
> > +cc David also!
> >
> > Unfortunately we put some hugetlb logic in mm/shmem.c too which makes
> > get_maintainers.pl not do the right thing.
> >
> > I wonder if we should add mm/shmem.c to the THP section _also_ just to cover
> > this off?
>
> I'd rather not. Too much noise :)
Haha fair enough... maybe we should move this stuff to huge_memory.c or
something, but not sure if that makes sense.
Not a _hugely_ big deal though tbh!
>
> --
> Cheers,
>
> David
Cheers, Lorenzo
* Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
2026-05-13 9:03 ` Lorenzo Stoakes
2026-05-13 9:38 ` David Hildenbrand (Arm)
@ 2026-05-13 10:09 ` ranxiaokai627
1 sibling, 0 replies; 9+ messages in thread
From: ranxiaokai627 @ 2026-05-13 10:09 UTC (permalink / raw)
To: ljs
Cc: akpm, baolin.wang, david, hughd, leitao, linux-kernel, linux-mm,
ran.xiaokai, ranxiaokai627
> +cc David also!
>
> Unfortunately we put some hugetlb logic in mm/shmem.c too which makes
> get_maintainers.pl not do the right thing.
>
> I wonder if we should add mm/shmem.c to the THP section _also_ just to cover
> this off?
>
> On Wed, May 13, 2026 at 11:04:12AM +0800, Baolin Wang wrote:
> > CC Breno and Lorenzo
>
> Ran - since you're going to need to do a respin anyway, please cc- Breno, David,
> myself and probably worth adding the THP people in general:
>
> R: Zi Yan <ziy@nvidia.com>
> R: Baolin Wang <baolin.wang@linux.alibaba.com> (Baolin already cc'd of course :)
> R: Liam R. Howlett <liam@infradead.org>
> R: Nico Pache <npache@redhat.com>
> R: Ryan Roberts <ryan.roberts@arm.com>
> R: Dev Jain <dev.jain@arm.com>
> R: Barry Song <baohua@kernel.org>
> R: Lance Yang <lance.yang@linux.dev>
>
> Thanks!
Hi Lorenzo,
I just sent out v2 and didn't notice your email in time.
Perhaps I should have waited longer to gather feedback from all reviewers.
I'll wait for the review comments on v2.
If a v3 is needed, I'll cc the THP people as suggested.
Thanks!