Date: Wed, 13 May 2026 10:03:25 +0100
From: Lorenzo Stoakes
To: Baolin Wang
Cc: ranxiaokai627@163.com, hughd@google.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, ran.xiaokai@zte.com.cn,
 Breno Leitao <leitao@debian.org>, David Hildenbrand
Subject: Re: [PATCH 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
References: <20260512120557.49995-1-ranxiaokai627@163.com>
 <7b8cb44c-a098-4430-a8ef-142fa7a19087@linux.alibaba.com>
In-Reply-To: <7b8cb44c-a098-4430-a8ef-142fa7a19087@linux.alibaba.com>

+cc David also!

Unfortunately we put some THP logic in mm/shmem.c too, which makes
get_maintainers.pl not do the right thing. I wonder if we should add
mm/shmem.c to the THP section _also_ just to cover this off?

On Wed, May 13, 2026 at 11:04:12AM +0800, Baolin Wang wrote:
> CC Breno and Lorenzo

Ran - since you're going to need to do a respin anyway, please cc Breno,
David and myself, and it's probably worth adding the THP people in
general:

R: Zi Yan
R: Baolin Wang (Baolin already cc'd of course :)
R: Liam R. Howlett
R: Nico Pache
R: Ryan Roberts
R: Dev Jain
R: Barry Song
R: Lance Yang

Thanks!
>
> On 5/12/26 8:05 PM, ranxiaokai627@163.com wrote:
> > From: Ran Xiaokai
> >
> > Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> > anon_enabled_store() with set_anon_enabled_mode()"), refactor
> > thpsize_shmem_enabled_store() using sysfs_match_string(). This
> > eliminates the duplicated spin_lock/unlock() and set/clear_bit()
> > calls across all branches, reducing code duplication.
> >
> > Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> > all test cases passed.
> >
> > Signed-off-by: Ran Xiaokai
> > ---
>
> Thanks for doing this.
>
> >  mm/shmem.c | 88 ++++++++++++++++++++++++++----------------------------
> >  1 file changed, 43 insertions(+), 45 deletions(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 3b5dc21b323c..0cc7872cc576 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
> >  struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
> >  static DEFINE_SPINLOCK(huge_shmem_orders_lock);
> > +enum huge_shmem_enabled_mode {
> > +	HUGE_SHMEM_ENABLED_ALWAYS = 0,
> > +	HUGE_SHMEM_ENABLED_INHERIT,
> > +	HUGE_SHMEM_ENABLED_WITHIN_SIZE,
> > +	HUGE_SHMEM_ENABLED_ADVISE,
> > +	HUGE_SHMEM_ENABLED_NEVER,
> > +};
> > +
> > +static const char * const huge_shmem_enabled_mode_strings[] = {
> > +	[HUGE_SHMEM_ENABLED_ALWAYS] = "always",
> > +	[HUGE_SHMEM_ENABLED_INHERIT] = "inherit",
> > +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
> > +	[HUGE_SHMEM_ENABLED_ADVISE] = "advise",
> > +	[HUGE_SHMEM_ENABLED_NEVER] = "never",
> > +};
> > +
> > +static unsigned long * const huge_shmem_orders_by_mode[] = {
> > +	[HUGE_SHMEM_ENABLED_ALWAYS] = &huge_shmem_orders_always,
> > +	[HUGE_SHMEM_ENABLED_INHERIT] = &huge_shmem_orders_inherit,
> > +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
> > +	[HUGE_SHMEM_ENABLED_ADVISE] = &huge_shmem_orders_madvise,
> > +};
> > +
> >  static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
> >  					  struct kobj_attribute *attr, char *buf)
> >  {
> > @@ -5551,57 +5574,32 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
> >  					   const char *buf, size_t count)
> >  {
> >  	int order = to_thpsize(kobj)->order;
> > +	int mode, m;
> >  	ssize_t ret = count;
> > +	int err;
> >
> > -	if (sysfs_streq(buf, "always")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_always);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "inherit")) {
> > -		/* Do not override huge allocation policy with non-PMD sized mTHP */
> > -		if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > -			return -EINVAL;
> > +	mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
> > +	if (mode < 0)
> > +		return -EINVAL;
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_inherit);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "within_size")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		set_bit(order, &huge_shmem_orders_within_size);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "advise")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		set_bit(order, &huge_shmem_orders_madvise);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else if (sysfs_streq(buf, "never")) {
> > -		spin_lock(&huge_shmem_orders_lock);
> > -		clear_bit(order, &huge_shmem_orders_always);
> > -		clear_bit(order, &huge_shmem_orders_inherit);
> > -		clear_bit(order, &huge_shmem_orders_within_size);
> > -		clear_bit(order, &huge_shmem_orders_madvise);
> > -		spin_unlock(&huge_shmem_orders_lock);
> > -	} else {
> > -		ret = -EINVAL;
> > +	/* Do not override huge allocation policy with non-PMD sized mTHP */
> > +	if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
> > +	    shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> > +		return -EINVAL;
> > +
> > +	spin_lock(&huge_shmem_orders_lock);
> > +	for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
> > +		if (m == mode)
> > +			set_bit(order, huge_shmem_orders_by_mode[m]);
> > +		else
> > +			clear_bit(order, huge_shmem_orders_by_mode[m]);
> > +	}
>
> We are already under the lock, so you can use non-atomic functions like
> commit 82d9ff648c6c does: __test_and_set_bit/__test_and_clear_bit.

Agreed. I guess you're going with the RMW form so we can conditionally
figure out if we need to start_stop_khugepaged()?

> > +	spin_unlock(&huge_shmem_orders_lock);
> >
> > -	if (ret > 0) {
> > -		int err = start_stop_khugepaged();
> > +	err = start_stop_khugepaged();
> > +	if (err)
>
> Moreover, I think we can follow commit 82d9ff648c6c's approach: if nothing
> changed, we don't need to call start_stop_khugepaged().

Agreed.
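Combining both suggestions, the function might end up looking something
like this (a completely untested sketch on my part - the changed flag
and the early return are my own addition, not something from Ran's
patch):

	static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
						   struct kobj_attribute *attr,
						   const char *buf, size_t count)
	{
		int order = to_thpsize(kobj)->order;
		bool changed = false;
		int mode, m, err;

		mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
		if (mode < 0)
			return -EINVAL;

		/* Do not override huge allocation policy with non-PMD sized mTHP */
		if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
		    shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
			return -EINVAL;

		spin_lock(&huge_shmem_orders_lock);
		for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
			/*
			 * We hold the lock, so the non-atomic RMW variants are
			 * fine, and their return values tell us whether a bit
			 * actually changed.
			 */
			if (m == mode)
				changed |= !__test_and_set_bit(order,
						huge_shmem_orders_by_mode[m]);
			else
				changed |= __test_and_clear_bit(order,
						huge_shmem_orders_by_mode[m]);
		}
		spin_unlock(&huge_shmem_orders_lock);

		/* Nothing changed, so no need to bounce khugepaged. */
		if (!changed)
			return count;

		err = start_stop_khugepaged();
		if (err)
			return err;

		return count;
	}

Cheers, Lorenzo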