From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kairui Song via B4 Relay
Date: Tue, 21 Apr 2026 14:16:47 +0800
Subject: [PATCH v3 03/12] mm/huge_memory: move THP gfp limit helper into header
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260421-swap-table-p4-v3-3-2f23759a76bc@tencent.com>
References: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
In-Reply-To: <20260421-swap-table-p4-v3-0-2f23759a76bc@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, David Hildenbrand, Zi Yan, Baolin Wang, Barry Song,
 Hugh Dickins, Chris Li, Kemeng Shi, Nhat Pham, Baoquan He,
 Johannes Weiner, Youngjun Park, Chengming Zhou, Roman Gushchin,
 Shakeel Butt, Muchun Song, Qi Zheng, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, Kairui Song, Yosry Ahmed, Lorenzo Stoakes,
 Dev Jain, Lance Yang, Michal Hocko, Suren Baghdasaryan, Axel Rasmussen
Reply-To: kasong@tencent.com

Shmem has
some special requirements for the THP allocation GFP: it has to limit
allocations to certain zones or fall back more leniently. We'll use this
helper for generic swap THP allocation, which needs to support shmem.
For a typical GFP_HIGHUSER_MOVABLE swap-in, this helper is basically a
no-op, but it is necessary for certain shmem users, mostly drivers.

No functional change.

Signed-off-by: Kairui Song
---
 include/linux/huge_mm.h | 30 ++++++++++++++++++++++++++++++
 mm/shmem.c              | 30 +++---------------------------
 2 files changed, 33 insertions(+), 27 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2949e5acff35..ffe5a120eee4 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -237,6 +237,31 @@ static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
 	return true;
 }
 
+/*
+ * Make sure huge_gfp is always more limited than limit_gfp.
+ * Some shmem users want THP allocation to be done less aggressively
+ * and only in certain zones.
+ */
+static inline gfp_t thp_limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
+{
+	gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
+	gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY;
+	gfp_t zoneflags = limit_gfp & GFP_ZONEMASK;
+	gfp_t result = huge_gfp & ~(allowflags | GFP_ZONEMASK);
+
+	/* Allow allocations only from the originally specified zones. */
+	result |= zoneflags;
+
+	/*
+	 * Minimize the result gfp by taking the union with the deny flags,
+	 * and the intersection of the allow flags.
+	 */
+	result |= (limit_gfp & denyflags);
+	result |= (huge_gfp & limit_gfp) & allowflags;
+
+	return result;
+}
+
 /*
  * Filter the bitfield of input orders to the ones suitable for use in the vma.
  * See thp_vma_suitable_order().
@@ -581,6 +606,11 @@ static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
 	return false;
 }
 
+static inline gfp_t thp_limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
+{
+	return huge_gfp;
+}
+
 static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 						    unsigned long addr, unsigned long orders)
 {
diff --git a/mm/shmem.c b/mm/shmem.c
index 3b5dc21b323c..5916acf594a8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1791,30 +1791,6 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
 	return folio;
 }
 
-/*
- * Make sure huge_gfp is always more limited than limit_gfp.
- * Some of the flags set permissions, while others set limitations.
- */
-static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
-{
-	gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
-	gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY;
-	gfp_t zoneflags = limit_gfp & GFP_ZONEMASK;
-	gfp_t result = huge_gfp & ~(allowflags | GFP_ZONEMASK);
-
-	/* Allow allocations only from the originally specified zones. */
-	result |= zoneflags;
-
-	/*
-	 * Minimize the result gfp by taking the union with the deny flags,
-	 * and the intersection of the allow flags.
-	 */
-	result |= (limit_gfp & denyflags);
-	result |= (huge_gfp & limit_gfp) & allowflags;
-
-	return result;
-}
-
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 bool shmem_hpage_pmd_enabled(void)
 {
@@ -2065,7 +2041,7 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 		    non_swapcache_batch(entry, nr_pages) != nr_pages)
 			goto fallback;
 
-		alloc_gfp = limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
+		alloc_gfp = thp_limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
 	}
 retry:
 	new = shmem_alloc_folio(alloc_gfp, order, info, index);
@@ -2141,7 +2117,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	if (nr_pages > 1) {
 		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
 
-		gfp = limit_gfp_mask(huge_gfp, gfp);
+		gfp = thp_limit_gfp_mask(huge_gfp, gfp);
 	}
 #endif
@@ -2548,7 +2524,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		gfp_t huge_gfp;
 
 		huge_gfp = vma_thp_gfp_mask(vma);
-		huge_gfp = limit_gfp_mask(huge_gfp, gfp);
+		huge_gfp = thp_limit_gfp_mask(huge_gfp, gfp);
 		folio = shmem_alloc_and_add_folio(vmf, huge_gfp,
 				inode, index, fault_mm, orders);
 		if (!IS_ERR(folio)) {
-- 
2.53.0