Date: Thu, 14 May 2026 13:48:08 +0100
From: Lorenzo Stoakes
To: ranxiaokai627@163.com
Cc: hughd@google.com, baolin.wang@linux.alibaba.com, akpm@linux-foundation.org, leitao@debian.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ran.xiaokai@zte.com.cn
Subject: Re: [PATCH v2 1/2] mm: huge_memory: refactor thpsize_shmem_enabled_store() with sysfs_match_string()
References: <20260513094508.50888-1-ranxiaokai627@163.com>
In-Reply-To: <20260513094508.50888-1-ranxiaokai627@163.com>

On Wed, May 13, 2026 at 09:45:07AM +0000, ranxiaokai627@163.com wrote:
> From: Ran Xiaokai
>
> Inspired by commit 82d9ff648c6c ("mm: huge_memory: refactor
> anon_enabled_store() with set_anon_enabled_mode()"), refactor
> thpsize_shmem_enabled_store() using sysfs_match_string().
> This eliminates the duplicated spin_lock/unlock() and set/clear_bit()
> calls across all branches, reducing code duplication.

You don't mention that you are changing the stop/start behaviour. This is
a fundamental part of the change and should be documented in the commit
message.

>
> Tested with selftests ./run_kselftest.sh -t mm:ksft_thp.sh,
> all test cases passed.
>
> Signed-off-by: Ran Xiaokai
> ---
>  mm/shmem.c | 94 +++++++++++++++++++++++++++++-------------------------
>  1 file changed, 51 insertions(+), 43 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 3b5dc21b323c..60cb10854f11 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -5526,6 +5526,29 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
>  struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
>  static DEFINE_SPINLOCK(huge_shmem_orders_lock);
>
> +enum huge_shmem_enabled_mode {
> +	HUGE_SHMEM_ENABLED_ALWAYS = 0,
> +	HUGE_SHMEM_ENABLED_INHERIT,
> +	HUGE_SHMEM_ENABLED_WITHIN_SIZE,
> +	HUGE_SHMEM_ENABLED_ADVISE,
> +	HUGE_SHMEM_ENABLED_NEVER,
> +};
> +
> +static const char * const huge_shmem_enabled_mode_strings[] = {
> +	[HUGE_SHMEM_ENABLED_ALWAYS] = "always",
> +	[HUGE_SHMEM_ENABLED_INHERIT] = "inherit",
> +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = "within_size",
> +	[HUGE_SHMEM_ENABLED_ADVISE] = "advise",
> +	[HUGE_SHMEM_ENABLED_NEVER] = "never",
> +};
> +
> +static unsigned long * const huge_shmem_orders_by_mode[] = {
> +	[HUGE_SHMEM_ENABLED_ALWAYS] = &huge_shmem_orders_always,
> +	[HUGE_SHMEM_ENABLED_INHERIT] = &huge_shmem_orders_inherit,
> +	[HUGE_SHMEM_ENABLED_WITHIN_SIZE] = &huge_shmem_orders_within_size,
> +	[HUGE_SHMEM_ENABLED_ADVISE] = &huge_shmem_orders_madvise,
> +};

I think it's perfectly cromulent to put this within a function rather than
as a static value at file scope. See below.

> +
>  static ssize_t thpsize_shmem_enabled_show(struct kobject *kobj,
>  					  struct kobj_attribute *attr, char *buf)
>  {
> @@ -5551,57 +5574,42 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
>  					   const char *buf, size_t count)
>  {
>  	int order = to_thpsize(kobj)->order;
> +	int mode, m;

I hate single-letter variable names (yes, I see set_anon_enabled_mode() has
one; I should have reviewed against that, oops :). Please give it its type
of enum huge_shmem_enabled_mode.
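As an aside for readers following along: the mode-indexed table of bitmask
pointers is what lets the five hand-written branches collapse into one loop.
A minimal userspace sketch of the same technique, with illustrative names
(plain bit operations stand in for the kernel's bitops, and no locking is
shown):

```c
#include <assert.h>

enum mode { MODE_ALWAYS, MODE_INHERIT, MODE_WITHIN_SIZE, MODE_ADVISE, NR_MODES };

static unsigned long orders_always, orders_inherit, orders_within_size, orders_madvise;

/* Mode-indexed table of pointers to the per-mode order bitmasks. */
static unsigned long * const orders_by_mode[NR_MODES] = {
	[MODE_ALWAYS]      = &orders_always,
	[MODE_INHERIT]     = &orders_inherit,
	[MODE_WITHIN_SIZE] = &orders_within_size,
	[MODE_ADVISE]      = &orders_madvise,
};

/*
 * Set `order` in the chosen mode's bitmask and clear it in every other
 * one, returning 1 if any bitmask actually changed -- the same shape as
 * the __test_and_set_bit()/__test_and_clear_bit() loop in the patch.
 */
static int set_order_mode(int order, enum mode mode)
{
	unsigned long bit = 1UL << order;
	int changed = 0;
	int m;

	for (m = 0; m < NR_MODES; m++) {
		if (m == (int)mode) {
			changed |= !(*orders_by_mode[m] & bit);
			*orders_by_mode[m] |= bit;
		} else {
			changed |= !!(*orders_by_mode[m] & bit);
			*orders_by_mode[m] &= ~bit;
		}
	}
	return changed;
}
```

Switching the table index is all it takes to move an order between modes,
and the `changed` result is what gates the start_stop_khugepaged() call
later in the function.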
>  	ssize_t ret = count;
> +	int err;

This is silly, we don't need both ret and err (the ret is not your fault,
obviously...). On success we return count, on failure we return err. Drop
the ret here. (I think Baolin reviewed similarly :)

> +	bool changed = false;
>
> -	if (sysfs_streq(buf, "always")) {
> -		spin_lock(&huge_shmem_orders_lock);
> -		clear_bit(order, &huge_shmem_orders_inherit);
> -		clear_bit(order, &huge_shmem_orders_madvise);
> -		clear_bit(order, &huge_shmem_orders_within_size);
> -		set_bit(order, &huge_shmem_orders_always);
> -		spin_unlock(&huge_shmem_orders_lock);
> -	} else if (sysfs_streq(buf, "inherit")) {
> -		/* Do not override huge allocation policy with non-PMD sized mTHP */
> -		if (shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> -			return -EINVAL;
> +	mode = sysfs_match_string(huge_shmem_enabled_mode_strings, buf);
> +	if (mode < 0)
> +		return -EINVAL;

I don't know why we filter this error; sysfs_match_string() returns -EINVAL
on failure anyway, so just return mode in this case.

>
> -		spin_lock(&huge_shmem_orders_lock);
> -		clear_bit(order, &huge_shmem_orders_always);
> -		clear_bit(order, &huge_shmem_orders_madvise);
> -		clear_bit(order, &huge_shmem_orders_within_size);
> -		set_bit(order, &huge_shmem_orders_inherit);
> -		spin_unlock(&huge_shmem_orders_lock);
> -	} else if (sysfs_streq(buf, "within_size")) {
> -		spin_lock(&huge_shmem_orders_lock);
> -		clear_bit(order, &huge_shmem_orders_always);
> -		clear_bit(order, &huge_shmem_orders_inherit);
> -		clear_bit(order, &huge_shmem_orders_madvise);
> -		set_bit(order, &huge_shmem_orders_within_size);
> -		spin_unlock(&huge_shmem_orders_lock);
> -	} else if (sysfs_streq(buf, "advise")) {
> -		spin_lock(&huge_shmem_orders_lock);
> -		clear_bit(order, &huge_shmem_orders_always);
> -		clear_bit(order, &huge_shmem_orders_inherit);
> -		clear_bit(order, &huge_shmem_orders_within_size);
> -		set_bit(order, &huge_shmem_orders_madvise);
> -		spin_unlock(&huge_shmem_orders_lock);
> -	} else if (sysfs_streq(buf, "never")) {
> -		spin_lock(&huge_shmem_orders_lock);
> -		clear_bit(order, &huge_shmem_orders_always);
> -		clear_bit(order, &huge_shmem_orders_inherit);
> -		clear_bit(order, &huge_shmem_orders_within_size);
> -		clear_bit(order, &huge_shmem_orders_madvise);
> -		spin_unlock(&huge_shmem_orders_lock);
> -	} else {
> -		ret = -EINVAL;
> -	}
> +	/* Do not override huge allocation policy with non-PMD sized mTHP */
> +	if (mode == HUGE_SHMEM_ENABLED_INHERIT &&
> +	    shmem_huge == SHMEM_HUGE_FORCE && !is_pmd_order(order))
> +		return -EINVAL;
>
> -	if (ret > 0) {
> -		int err = start_stop_khugepaged();
> +	spin_lock(&huge_shmem_orders_lock);
> +	for (m = 0; m < ARRAY_SIZE(huge_shmem_orders_by_mode); m++) {
> +		if (m == mode)
> +			changed |= !__test_and_set_bit(order, huge_shmem_orders_by_mode[m]);
> +		else
> +			changed |= __test_and_clear_bit(order, huge_shmem_orders_by_mode[m]);
> +	}
> +	spin_unlock(&huge_shmem_orders_lock);

You're copy/pasta'ing anon_enabled_store() but not the nicer refactoring
that's there. Please split this out like set_anon_enabled_mode() does, and
put:

	static unsigned long *enabled_orders[] = {
		&huge_shmem_orders_always,
		&huge_shmem_orders_inherit,
		&huge_shmem_orders_within_size,
		&huge_shmem_orders_madvise,
	};

at the start of it, like that function does. Please don't reproduce the
single-letter variable name though :)

>
> +	if (changed) {
> +		err = start_stop_khugepaged();
>  		if (err)
>  			ret = err;
> +	} else {
> +		/*
> +		 * Recalculate watermarks even when the mode hasn't changed
> +		 * to preserve the legacy behavior, as this is always called
> +		 * inside start_stop_khugepaged().
> +		 */
> +		set_recommended_min_free_kbytes();
>  	}
> +
>  	return ret;

return count;

>  }
>
> --
> 2.25.1
>
>

Cheers,
Lorenzo
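[Editorial note on the sysfs_match_string() point above: the helper already
reports failure as a negative errno, which is why the caller can propagate
its return value directly instead of remapping it. A minimal userspace
sketch of that pattern; match_mode() and store_mode() are hypothetical
stand-ins, not the kernel API:]

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

static const char * const mode_strings[] = {
	"always", "inherit", "within_size", "advise", "never",
};

/*
 * Hypothetical stand-in for sysfs_match_string(): returns the index of
 * the matching string, or -EINVAL if nothing matches.
 */
static int match_mode(const char *buf)
{
	int i;

	for (i = 0; i < (int)(sizeof(mode_strings) / sizeof(mode_strings[0])); i++) {
		if (strcmp(buf, mode_strings[i]) == 0)
			return i;
	}
	return -EINVAL;
}

/* The caller propagates the negative value rather than filtering it. */
static int store_mode(const char *buf)
{
	int mode = match_mode(buf);

	if (mode < 0)
		return mode;	/* already -EINVAL; no need to remap */
	/* ... apply `mode` ... */
	return mode;
}
```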