From: Lance Yang <lance.yang@linux.dev>
To: baolin.wang@linux.alibaba.com
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
	ziy@nvidia.com, david@kernel.org, ljs@kernel.org,
	lance.yang@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: shmem: don't set large-order range for internal anonymous shmem mapping
Date: Tue, 7 Apr 2026 14:49:03 +0800
Message-Id: <20260407064903.69017-1-lance.yang@linux.dev>

On Tue, Apr 07, 2026 at 02:07:27PM +0800, Baolin Wang wrote:
>Anonymous shmem large order allocations are dynamically controlled via the
>global THP sysfs knob (/sys/kernel/mm/transparent_hugepage/shmem_enabled)
>and the per-size mTHP knobs
>(/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled).
>
>Therefore, anonymous shmem uses shmem_allowable_huge_orders() to check
>which large orders are allowed, rather than relying on
>mapping_max_folio_order(). Moreover, mapping_max_folio_order() is intended
>to control large order allocations only for tmpfs mounts. Clarify this by
>not setting a large-order range for internal anonymous shmem mappings, to
>avoid confusion, as discussed in the previous thread[1].
>
>[1] https://lore.kernel.org/all/ec927492-4577-4192-8fad-85eb1bb43121@linux.alibaba.com/
>
>Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>---
> mm/shmem.c | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
>diff --git a/mm/shmem.c b/mm/shmem.c
>index 4ecefe02881d..a60fe067969c 100644
>--- a/mm/shmem.c
>+++ b/mm/shmem.c
>@@ -3088,8 +3088,17 @@ static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
> 	if (sbinfo->noswap)
> 		mapping_set_unevictable(inode->i_mapping);
> 
>-	/* Don't consider 'deny' for emergencies and 'force' for testing */
>-	if (sbinfo->huge)
>+	/*
>+	 * Only set the large order range for tmpfs mounts. The large order
>+	 * selection for the internal anonymous shmem mount is configured
>+	 * dynamically via the 'shmem_enabled' interfaces, so there is no
>+	 * need to set a large order range for the internal anonymous shmem
>+	 * mapping.
>+	 *
>+	 * Note: Don't consider 'deny' for emergencies and 'force' for
>+	 * testing.
>+	 */
>+	if (sbinfo->huge && !(sb->s_flags & SB_KERNMOUNT))

FWIW, SB_KERNMOUNT is broader than "internal anonymous shmem" and covers
all shm_mnt users too. So maybe "internal shmem mount" would be a better
description of what this code is actually checking.

> 		mapping_set_large_folios(inode->i_mapping);
> 
> 	switch (mode & S_IFMT) {

Cheers,
Lance
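For readers outside the kernel tree, the gist of the patched condition can be sketched as a standalone snippet. The struct names and the helper below are simplified stand-ins, not the kernel's real definitions; only the boolean logic mirrors the hunk above:

```c
#include <stdbool.h>

/* Illustrative stand-in for the kernel's SB_KERNMOUNT super-block flag;
 * treated here as an opaque bit rather than the kernel's actual value. */
#define SB_KERNMOUNT (1UL << 22)

/* Simplified stand-ins for struct super_block and shmem_sb_info. */
struct sb { unsigned long s_flags; };
struct sbinfo { int huge; };

/* The patched check: mark the mapping as supporting large folios only
 * when the mount has a huge= policy set AND it is not a kernel-internal
 * mount (which, per the review comment above, includes the internal
 * anonymous shmem mount and other shm_mnt users). */
static bool should_set_large_folios(const struct sbinfo *sbinfo,
				    const struct sb *sb)
{
	return sbinfo->huge && !(sb->s_flags & SB_KERNMOUNT);
}
```

With this shape, a tmpfs mount with a huge policy still gets the large-order range, while any SB_KERNMOUNT mount falls through to the per-size `shmem_enabled` interfaces instead.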