From: Lance Yang <lance.yang@linux.dev>
To: baolin.wang@linux.alibaba.com
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
ziy@nvidia.com, david@kernel.org, ljs@kernel.org,
lance.yang@linux.dev, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: shmem: don't set large-order range for internal anonymous shmem mapping
Date: Tue, 7 Apr 2026 14:49:03 +0800
Message-ID: <20260407064903.69017-1-lance.yang@linux.dev>
In-Reply-To: <cd5c563db32507c9c090a0ae8287e1851cb0a5d3.1775541661.git.baolin.wang@linux.alibaba.com>
On Tue, Apr 07, 2026 at 02:07:27PM +0800, Baolin Wang wrote:
>Anonymous shmem large order allocations are dynamically controlled via the
>global THP sysfs knob (/sys/kernel/mm/transparent_hugepage/shmem_enabled)
>and the per-size mTHP knobs (/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled).
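For readers following along, these knobs can be inspected with a small helper like the one below (a hypothetical sketch, not part of the patch; the paths are the standard THP sysfs locations and exist only on kernels built with CONFIG_TRANSPARENT_HUGEPAGE):

```shell
# Hypothetical helper: dump the global and per-size shmem THP knobs
# referred to in the commit message. Available hugepage sizes vary by
# architecture and base page size.
show_shmem_thp_knobs() {
    base=/sys/kernel/mm/transparent_hugepage
    if [ -r "$base/shmem_enabled" ]; then
        echo "global: $(cat "$base/shmem_enabled")"
    else
        echo "global: (THP sysfs not available on this kernel)"
    fi
    for f in "$base"/hugepages-*kB/shmem_enabled; do
        # Skip the unexpanded glob / unreadable entries.
        [ -r "$f" ] || continue
        echo "${f}: $(cat "$f")"
    done
}

show_shmem_thp_knobs
```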
>
>Therefore, anonymous shmem uses shmem_allowable_huge_orders() to check
>which large orders are allowed, rather than relying on mapping_max_folio_order().
>Moreover, mapping_max_folio_order() is intended to control large order
>allocations only for tmpfs mounts. Clarify this by not setting a large-order
>range for internal anonymous shmem mappings, to avoid confusion, as discussed
>in the previous thread[1].
>
>[1] https://lore.kernel.org/all/ec927492-4577-4192-8fad-85eb1bb43121@linux.alibaba.com/
>
>Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>---
> mm/shmem.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
>diff --git a/mm/shmem.c b/mm/shmem.c
>index 4ecefe02881d..a60fe067969c 100644
>--- a/mm/shmem.c
>+++ b/mm/shmem.c
>@@ -3088,8 +3088,17 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
> if (sbinfo->noswap)
> mapping_set_unevictable(inode->i_mapping);
>
>- /* Don't consider 'deny' for emergencies and 'force' for testing */
>- if (sbinfo->huge)
>+ /*
>+ * Only set the large order range for tmpfs mounts. The large order
>+ * selection for the internal anonymous shmem mount is configured
>+ * dynamically via the 'shmem_enabled' interfaces, so there is no
>+ * need to set a large order range for the internal anonymous shmem
>+ * mapping.
>+ *
>+ * Note: Don't consider 'deny' for emergencies and 'force' for
>+ * testing.
>+ */
>+ if (sbinfo->huge && !(sb->s_flags & SB_KERNMOUNT))
FWIW, SB_KERNMOUNT is broader than "internal anonymous shmem" and covers
all shm_mnt users too.
So maybe "internal shmem mount" would be a better description of what
this code is actually checking.
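To make that concrete (paraphrasing existing upstream code from memory, so treat the details as a sketch rather than a quote):

```
/* mm/shmem.c (kernel-internal, not standalone-buildable): the internal
 * mount is created via kern_mount(), which sets SB_KERNMOUNT on the
 * superblock. SysV SHM and memfd files are also created on this same
 * shm_mnt, so !(sb->s_flags & SB_KERNMOUNT) skips all internal shmem
 * mounts, not only the anonymous-shmem case.
 */
shm_mnt = kern_mount(&shmem_fs_type);
```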
> mapping_set_large_folios(inode->i_mapping);
>
> switch (mode & S_IFMT) {
Cheers,
Lance