From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 03 May 2026 06:21:11 -0700
To: 
mm-commits@vger.kernel.org, zokeefe@google.com, ziy@nvidia.com, ying.huang@linux.alibaba.com, yang@os.amperecomputing.com, willy@infradead.org, will@kernel.org, wangkefeng.wang@huawei.com, vishal.moola@gmail.com, vbabka@suse.cz, usama.arif@linux.dev, tiwai@suse.de, thomas.hellstrom@linux.intel.com, surenb@google.com, sunnanyong@huawei.com, shivankg@amd.com, ryan.roberts@arm.com, rppt@kernel.org, rostedt@goodmis.org, rientjes@google.com, richard.weiyang@gmail.com, rdunlap@infradead.org, raquini@redhat.com, rakie.kim@sk.com, pfalcato@suse.de, peterx@redhat.com, npache@redhat.com, mhocko@suse.com, mhiramat@kernel.org, matthew.brost@intel.com, mathieu.desnoyers@efficios.com, ljs@kernel.org, liam@infradead.org, lance.yang@linux.dev, joshua.hahnjy@gmail.com, jannh@google.com, jack@suse.cz, jackmanb@google.com, hughd@google.com, hannes@cmpxchg.org, gourry@gourry.net, dev.jain@arm.com, david@kernel.org, corbet@lwn.net, catalin.marinas@arm.com, byungchul@sk.com, baohua@kernel.org, bagasdotme@gmail.com, apopple@nvidia.com, anshuman.khandual@arm.com, aarcange@redhat.com, baolin.wang@linux.alibaba.com, akpm@linux-foundation.org
From: Andrew Morton
Subject: [to-be-updated] mm-khugepaged-run-khugepaged-for-all-orders.patch removed from -mm tree
Message-Id: <20260503132113.761F2C2BCC4@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The quilt patch titled
     Subject: mm/khugepaged: run khugepaged for all orders
has been removed from the -mm tree.  Its filename was
     mm-khugepaged-run-khugepaged-for-all-orders.patch

This patch was dropped because an updated version will be issued

------------------------------------------------------
From: Baolin Wang
Subject: mm/khugepaged: run khugepaged for all orders
Date: Sun, 19 Apr 2026 12:57:49 -0600

If any order (m)THP is enabled we should allow running khugepaged to
attempt scanning and collapsing mTHPs.
In order for khugepaged to operate when only mTHP sizes are specified in
sysfs, we must modify the predicate function that determines whether it
ought to run.  This function is currently called hugepage_pmd_enabled();
this patch renames it to hugepage_enabled() and updates the logic to check
whether any valid orders exist which would justify khugepaged running.

We must also update collapse_allowable_orders() to check all orders if the
vma is anonymous and the collapse is invoked by khugepaged.  After this
patch, khugepaged mTHP collapse is fully enabled.

Link: https://lore.kernel.org/20260419185750.260784-13-npache@redhat.com
Signed-off-by: Baolin Wang
Signed-off-by: Nico Pache
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Lance Yang
Acked-by: Usama Arif
Acked-by: David Hildenbrand (Arm)
Cc: Alistair Popple
Cc: Andrea Arcangeli
Cc: Anshuman Khandual
Cc: Bagas Sanjaya
Cc: Barry Song
Cc: Brendan Jackman
Cc: Byungchul Park
Cc: Catalin Marinas
Cc: David Rientjes
Cc: Dev Jain
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Hugh Dickins
Cc: Jan Kara
Cc: Jann Horn
Cc: Johannes Weiner
Cc: Jonathan Corbet
Cc: Joshua Hahn
Cc: Kefeng Wang
Cc: Liam Howlett
Cc: "Masami Hiramatsu (Google)"
Cc: Mathieu Desnoyers
Cc: Matthew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Nanyong Sun
Cc: Pedro Falcato
Cc: Peter Xu
Cc: Rafael Aquini
Cc: Rakie Kim
Cc: Randy Dunlap
Cc: Ryan Roberts
Cc: Shivank Garg
Cc: Steven Rostedt
Cc: Suren Baghdasaryan
Cc: Takashi Iwai (SUSE)
Cc: Thomas Hellström
Cc: Vishal Moola (Oracle)
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Will Deacon
Cc: Yang Shi
Cc: Zach O'Keefe
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

 mm/khugepaged.c |   30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

--- a/mm/khugepaged.c~mm-khugepaged-run-khugepaged-for-all-orders
+++ a/mm/khugepaged.c
@@ -524,23 +524,23 @@ static inline int collapse_test_exit_or_
 		mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
 }
 
-static bool hugepage_pmd_enabled(void)
+static bool hugepage_enabled(void)
 {
 	/*
 	 * We cover the anon, shmem and the file-backed case here; file-backed
 	 * hugepages, when configured in, are determined by the global control.
-	 * Anon pmd-sized hugepages are determined by the pmd-size control.
+	 * Anon hugepages are determined by its per-size mTHP control.
 	 * Shmem pmd-sized hugepages are also determined by its pmd-size control,
 	 * except when the global shmem_huge is set to SHMEM_HUGE_DENY.
 	 */
 	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && hugepage_global_enabled())
 		return true;
-	if (test_bit(PMD_ORDER, &huge_anon_orders_always))
+	if (READ_ONCE(huge_anon_orders_always))
 		return true;
-	if (test_bit(PMD_ORDER, &huge_anon_orders_madvise))
+	if (READ_ONCE(huge_anon_orders_madvise))
 		return true;
-	if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
+	if (READ_ONCE(huge_anon_orders_inherit) &&
 	    hugepage_global_enabled())
 		return true;
 	if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled())
@@ -581,7 +581,13 @@ void __khugepaged_enter(struct mm_struct
 static unsigned long collapse_allowable_orders(struct vm_area_struct *vma,
 			enum tva_type tva_flags)
 {
-	unsigned long orders = BIT(HPAGE_PMD_ORDER);
+	unsigned long orders;
+
+	/* If khugepaged is scanning an anonymous vma, allow mTHP collapse */
+	if ((tva_flags & TVA_KHUGEPAGED) && vma_is_anonymous(vma))
+		orders = THP_ORDERS_ALL_ANON;
+	else
+		orders = BIT(HPAGE_PMD_ORDER);
 
 	return thp_vma_allowable_orders(vma, vma->vm_flags, tva_flags, orders);
 }
@@ -589,7 +595,7 @@ static unsigned long collapse_allowable_
 void khugepaged_enter_vma(struct vm_area_struct *vma)
 {
 	if (!mm_flags_test(MMF_VM_HUGEPAGE, vma->vm_mm) &&
-	    hugepage_pmd_enabled()) {
+	    hugepage_enabled()) {
 		if (collapse_allowable_orders(vma, TVA_KHUGEPAGED))
 			__khugepaged_enter(vma->vm_mm);
 	}
@@ -2936,7 +2942,7 @@ breakouterloop_mmap_lock:
 
 static int khugepaged_has_work(void)
 {
-	return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled();
+	return !list_empty(&khugepaged_scan.mm_head) && hugepage_enabled();
 }
 
 static int khugepaged_wait_event(void)
@@ -3009,7 +3015,7 @@ static void khugepaged_wait_work(void)
 		return;
 	}
 
-	if (hugepage_pmd_enabled())
+	if (hugepage_enabled())
 		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
 }
 
@@ -3040,7 +3046,7 @@ void set_recommended_min_free_kbytes(voi
 	int nr_zones = 0;
 	unsigned long recommended_min;
 
-	if (!hugepage_pmd_enabled()) {
+	if (!hugepage_enabled()) {
 		calculate_min_free_kbytes();
 		goto update_wmarks;
 	}
@@ -3090,7 +3096,7 @@ int start_stop_khugepaged(void)
 	int err = 0;
 
 	mutex_lock(&khugepaged_mutex);
-	if (hugepage_pmd_enabled()) {
+	if (hugepage_enabled()) {
 		if (!khugepaged_thread)
 			khugepaged_thread = kthread_run(khugepaged, NULL,
 							"khugepaged");
@@ -3116,7 +3122,7 @@ fail:
 void khugepaged_min_free_kbytes_update(void)
 {
 	mutex_lock(&khugepaged_mutex);
-	if (hugepage_pmd_enabled() && khugepaged_thread)
+	if (hugepage_enabled() && khugepaged_thread)
 		set_recommended_min_free_kbytes();
 	mutex_unlock(&khugepaged_mutex);
 }
_

Patches currently in -mm which might be from baolin.wang@linux.alibaba.com are

revert-tmpfs-dont-enable-large-folios-if-not-supported.patch