Linux-mm Archive on lore.kernel.org
From: Lorenzo Stoakes <ljs@kernel.org>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: ranxiaokai627@163.com, hughd@google.com,
	akpm@linux-foundation.org,  linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ran.xiaokai@zte.com.cn,
	 "leitao@debian.org >> Breno Leitao" <leitao@debian.org>,
	David Hildenbrand <david@kernel.org>
Subject: Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
Date: Wed, 13 May 2026 10:03:25 +0100	[thread overview]
Message-ID: <agQ9FXDr_5ybD7AR@lucifer> (raw)
In-Reply-To: <7b8cb44c-a098-4430-a8ef-142fa7a19087@linux.alibaba.com>

+cc David also!

Unfortunately we put some THP logic in mm/shmem.c too, which makes
get_maintainers.pl not do the right thing.

I wonder if we should add mm/shmem.c to the THP section _also_ just to cover
this off?
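For illustration, the addition might look something like this (the exact
section heading and neighbouring F: entries here are from memory, so treat
it as a sketch rather than a verbatim MAINTAINERS diff):

```diff
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ MEMORY MANAGEMENT - TRANSPARENT HUGE PAGE (THP)
 F:	mm/huge_memory.c
 F:	mm/khugepaged.c
+F:	mm/shmem.c
```

mm/shmem.c would then be matched by both the tmpfs and THP sections, which
is the point - patches touching the shmem THP paths would cc both sets of
people.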

On Wed, May 13, 2026 at 11:04:12AM +0800, Baolin Wang wrote:
> CC Breno and Lorenzo

Ran - since you're going to need to do a respin anyway, please cc Breno, David,
myself, and it's probably worth adding the THP people in general:

R:	Zi Yan <ziy@nvidia.com>
R:	Baolin Wang <baolin.wang@linux.alibaba.com> (Baolin already cc'd of course :)
R:	Liam R. Howlett <liam@infradead.org>
R:	Nico Pache <npache@redhat.com>
R:	Ryan Roberts <ryan.roberts@arm.com>
R:	Dev Jain <dev.jain@arm.com>
R:	Barry Song <baohua@kernel.org>
R:	Lance Yang <lance.yang@linux.dev>

Thanks!

>
> On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
> > From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> >
> > Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> > anon_enabled_store() with set_anon_enabled_mode()"), refactor
> > thpsize_shmem_enabled_store() using sysfs_match_string().
> > This eliminates the duplicated spin_lock()/spin_unlock() and
> > set_bit()/clear_bit() calls across all branches.
> >
> > Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> > all test cases passed.
> >
> > Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> > ---
>
> Thanks for doing this.
>
> >   mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
> >   1 file changed, 43 insertions(+), 45 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 3b5dc21b323c..0cc7872cc576 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
> >   struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
> >   static DEFINE_SPINLOCK(huge_shmem_orders_lock);
> > +enum huge_shmem_enabled_mode {
> > +	HUGE_SHMEM_ENABLED_ALWAYS = 0,
> > +	HUGE_SHMEM_ENABLED_INHERIT,
> > +	HUGE_SHMEM_ENABLED_WITHIN_SIZE,
> > +	HUGE_SHMEM_ENABLED_ADVISE,
> > +	HUGE_SHMEM_ENABLED_NEVER,
> > +};
> > +
> > +static const char * const huge_shmem_enabled_mode_strings[] = {
> > +	[HUGE_SHMEM_ENABLED_ALWAYS]      = "always",
> > +	[HUGE_SHMEM_ENABLED_INHERIT]     = "inherit",
> > +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
> > +	[HUGE_SHMEM_ENABLED_ADVISE]      = "advise",
> > +	[HUGE_SHMEM_ENABLED_NEVER]       = "never",
> > +};
> > +
> > +static unsigned long * const huge_shmem_orders_by_mode[] = {
> > +	[HUGE_SHMEM_ENABLED_ALWAYS]      = &huge_shmem_orders_always,
> > +	[HUGE_SHMEM_ENABLED_INHERIT]     = &huge_shmem_orders_inherit,
> > +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
> > +	[HUGE_SHMEM_ENABLED_ADVISE]      = &huge_shmem_orders_madvise,
> > +};
> > +
> >   static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
> >   					  struct kobj_attribute *attr, char *buf)
> >   {
> > @@ -5551,57 +5574,32 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
> >   					   const char *buf, size_t count)
> >   {
> >   	int order = to_thpsize(kobj)->order;
> > +	int mode, m;
> >   	ssize_t ret = count;
> > +	int err;
> > -	if (sysfs_streq(buf, "always")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_always);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "inherit")) {
> > -		/* Do not override huge allocation policy with non-PMD sized mTHP */
> > -		if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > -			return -EINVAL;
> > +	mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
> > +	if (mode < 0)
> > +		return -EINVAL;
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_inherit);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "within_size")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		set_bit(order, &huge_shmem_orders_within_size);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "advise")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_madvise);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "never")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else {
> > -		ret = -EINVAL;
> > +	/* Do not override huge allocation policy with non-PMD sized mTHP */
> > +	if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
> > +		shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > +		return -EINVAL;
> > +
> > +	spin_lock(&huge_shmem_orders_lock);
> > +	for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
> > +		if (m == mode)
> > +			set_bit(order, huge_shmem_orders_by_mode[m]);
> > +		else
> > +			clear_bit(order, huge_shmem_orders_by_mode[m]);
> >   	}
>
> We are already under the lock, so you can use non-atomic functions like
> commit 82d9ff648c6c does: __test_and_set_bit/__test_and_clear_bit.

Agreed. I guess you're going with the RMW form so we can conditionally figure
out if we need to start_stop_khugepaged()?

>
> > +	spin_unlock(&huge_shmem_orders_lock);
> > -	if (ret > 0) {
> > -		int err = start_stop_khugepaged();
> > +	err = start_stop_khugepaged();
> > +	if (err)
>
> Moreover, I think we can follow commit 82d9ff648c6c's approach: if nothing
> changed, we don't need to call start_stop_khugepaged().

Agreed.

Cheers, Lorenzo


Thread overview: 9+ messages
2026-05-12 12:05 [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() ranxiaokai627
2026-05-12 12:05 ` [PATCH 2/2] mm: huge_memory: refactor thpsize_shmem_enabled_show() with helper arrays ranxiaokai627
2026-05-13  3:04 ` [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string() Baolin Wang
2026-05-13  8:25   ` ranxiaokai627
2026-05-13  8:56     ` Breno Leitao
2026-05-13  9:03   ` Lorenzo Stoakes [this message]
2026-05-13  9:38     ` David Hildenbrand (Arm)
2026-05-13  9:40       ` Lorenzo Stoakes
2026-05-13 10:09     ` ranxiaokai627
