From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6415613C9C4
	for ; Sun, 3 May 2026 13:20:33 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1777814433; cv=none;
	b=JO/JRWHkyGbGlr0ZWnk6BYDj83z0C3fwHGd4aK5mAyRL5ng8+TrSEsTs5es1DMBKo9140tgnnxcWN4x4l7z6Z6mhUeXvOj99Bg0Wg98e4T4LiJi9VaWlRTv/SKKAX+EAzK/HcAaqrnp3RkQHvt007GXrwzjuWLKym4qF1GXZdEQ=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1777814433; c=relaxed/simple;
	bh=zjk7rwl8sff5bpLtrd5NjR1rj9SScZB+5nGGVnma4K4=;
	h=Date:To:From:Subject:Message-Id;
	b=cS/2NcJ+VErgbOVmTtEYrYPTGq6sdILGHVST8yrw7HEPdL7+vf3/ik/xmWQwFxwrN0dlluGkmHHpgcyydIVjvm/c+74FIovd3RgIXnQ80PUwU4KtttYiJwumQsxB7FVo10/U0ECbd53ZljX2MhukTdOsevvoIvs+gEvqL6KidP8=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linux-foundation.org header.i=@linux-foundation.org header.b=O+w9w0+j;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linux-foundation.org header.i=@linux-foundation.org header.b="O+w9w0+j"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9D403C2BCB4;
	Sun, 3 May 2026 13:20:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org; s=korg;
	t=1777814433; bh=zjk7rwl8sff5bpLtrd5NjR1rj9SScZB+5nGGVnma4K4=;
	h=Date:To:From:Subject:From;
	b=O+w9w0+j+LUIbRaq8fRhN40pifV/HBfd6QFWLMVECpOKg1B91f8/Zghtwa3Z1L4vh
	 zLzCP+Ko2jwpLqvDJ1b7Mi/zn2xJrwsxjpk0XzbyKoUMJy1MQm16lDn1MY4wfbrstF
	 Iw6YAWvX7P+TUa2fhT5Y89dNOQsoQzK7h7IgVy9M=
Date: Sun, 03 May 2026 06:20:24 -0700
To: mm-commits@vger.kernel.org,zokeefe@google.com,ziy@nvidia.com,
	ying.huang@linux.alibaba.com,yang@os.amperecomputing.com,willy@infradead.org,
	will@kernel.org,wangkefeng.wang@huawei.com,vishal.moola@gmail.com,vbabka@suse.cz,
	usama.arif@linux.dev,tiwai@suse.de,thomas.hellstrom@linux.intel.com,
	surenb@google.com,sunnanyong@huawei.com,shivankg@amd.com,ryan.roberts@arm.com,
	rppt@kernel.org,rostedt@goodmis.org,rientjes@google.com,richard.weiyang@gmail.com,
	rdunlap@infradead.org,raquini@redhat.com,rakie.kim@sk.com,pfalcato@suse.de,
	peterx@redhat.com,npache@redhat.com,mhocko@suse.com,mhiramat@kernel.org,
	matthew.brost@intel.com,mathieu.desnoyers@efficios.com,ljs@kernel.org,
	liam@infradead.org,lance.yang@linux.dev,joshua.hahnjy@gmail.com,jannh@google.com,
	jack@suse.cz,jackmanb@google.com,hughd@google.com,hannes@cmpxchg.org,
	gourry@gourry.net,david@kernel.org,corbet@lwn.net,catalin.marinas@arm.com,
	byungchul@sk.com,baolin.wang@linux.alibaba.com,baohua@kernel.org,
	bagasdotme@gmail.com,apopple@nvidia.com,anshuman.khandual@arm.com,
	aarcange@redhat.com,dev.jain@arm.com,akpm@linux-foundation.org
From: Andrew Morton
Subject: [to-be-updated] mm-khugepaged-generalize-alloc_charge_folio.patch removed from -mm tree
Message-Id: <20260503132032.9D403C2BCB4@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The quilt patch titled
     Subject: mm/khugepaged: generalize alloc_charge_folio()
has been removed from the -mm tree.  Its filename was
     mm-khugepaged-generalize-alloc_charge_folio.patch

This patch was dropped because an updated version will be issued

------------------------------------------------------
From: Dev Jain
Subject: mm/khugepaged: generalize alloc_charge_folio()
Date: Sun, 19 Apr 2026 12:57:39 -0600

Pass order to alloc_charge_folio() and update mTHP statistics.
Link: https://lore.kernel.org/20260419185750.260784-3-npache@redhat.com
Co-developed-by: Nico Pache
Signed-off-by: Nico Pache
Signed-off-by: Dev Jain
Reviewed-by: Wei Yang
Reviewed-by: Lance Yang
Reviewed-by: Baolin Wang
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Zi Yan
Acked-by: David Hildenbrand (Arm)
Acked-by: Usama Arif
Cc: Alistair Popple
Cc: Andrea Arcangeli
Cc: Anshuman Khandual
Cc: Bagas Sanjaya
Cc: Barry Song
Cc: Brendan Jackman
Cc: Byungchul Park
Cc: Catalin Marinas
Cc: David Rientjes
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Hugh Dickins
Cc: Jan Kara
Cc: Jann Horn
Cc: Johannes Weiner
Cc: Jonathan Corbet
Cc: Joshua Hahn
Cc: Kefeng Wang
Cc: Liam Howlett
Cc: "Masami Hiramatsu (Google)"
Cc: Mathieu Desnoyers
Cc: Matthew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Nanyong Sun
Cc: Pedro Falcato
Cc: Peter Xu
Cc: Rafael Aquini
Cc: Rakie Kim
Cc: Randy Dunlap
Cc: Ryan Roberts
Cc: Shivank Garg
Cc: Steven Rostedt
Cc: Suren Baghdasaryan
Cc: Takashi Iwai (SUSE)
Cc: Thomas Hellström
Cc: Vishal Moola (Oracle)
Cc: Vlastimil Babka
Cc: Will Deacon
Cc: Yang Shi
Cc: Zach O'Keefe
Signed-off-by: Andrew Morton
---

 Documentation/admin-guide/mm/transhuge.rst |    8 ++++++++
 include/linux/huge_mm.h                    |    2 ++
 mm/huge_memory.c                           |    4 ++++
 mm/khugepaged.c                            |   17 +++++++++++------
 4 files changed, 25 insertions(+), 6 deletions(-)

--- a/Documentation/admin-guide/mm/transhuge.rst~mm-khugepaged-generalize-alloc_charge_folio
+++ a/Documentation/admin-guide/mm/transhuge.rst
@@ -639,6 +639,14 @@ anon_fault_fallback_charge
 	instead falls back to using huge pages with lower orders or
 	small pages even though the allocation was successful.
 
+collapse_alloc
+	is incremented every time a huge page is successfully allocated for a
+	khugepaged collapse.
+
+collapse_alloc_failed
+	is incremented every time a huge page allocation fails during a
+	khugepaged collapse.
+
 zswpout
 	is incremented every time a huge page is swapped out to zswap in one
 	piece without splitting.
--- a/include/linux/huge_mm.h~mm-khugepaged-generalize-alloc_charge_folio
+++ a/include/linux/huge_mm.h
@@ -128,6 +128,8 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_ALLOC,
 	MTHP_STAT_ANON_FAULT_FALLBACK,
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_COLLAPSE_ALLOC,
+	MTHP_STAT_COLLAPSE_ALLOC_FAILED,
 	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPIN,
 	MTHP_STAT_SWPIN_FALLBACK,
--- a/mm/huge_memory.c~mm-khugepaged-generalize-alloc_charge_folio
+++ a/mm/huge_memory.c
@@ -685,6 +685,8 @@ static struct kobj_attribute _name##_att
 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
 DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
@@ -750,6 +752,8 @@ static struct attribute *any_stats_attrs
 #endif
 	&split_attr.attr,
 	&split_failed_attr.attr,
+	&collapse_alloc_attr.attr,
+	&collapse_alloc_failed_attr.attr,
 	NULL,
 };
--- a/mm/khugepaged.c~mm-khugepaged-generalize-alloc_charge_folio
+++ a/mm/khugepaged.c
@@ -1068,21 +1068,26 @@ out:
 }
 
 static enum scan_result alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
-					   struct collapse_control *cc)
+					   struct collapse_control *cc, unsigned int order)
 {
 	gfp_t gfp = (cc->is_khugepaged ?
 		     alloc_hugepage_khugepaged_gfpmask() : GFP_TRANSHUGE);
 	int node = collapse_find_target_node(cc);
 	struct folio *folio;
 
-	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
+	folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
 	if (!folio) {
 		*foliop = NULL;
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		if (is_pmd_order(order))
+			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+		count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	}
 
-	count_vm_event(THP_COLLAPSE_ALLOC);
+	if (is_pmd_order(order))
+		count_vm_event(THP_COLLAPSE_ALLOC);
+	count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);
+
 	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
 		folio_put(folio);
 		*foliop = NULL;
@@ -1118,7 +1123,7 @@ static enum scan_result collapse_huge_pa
 	 */
 	mmap_read_unlock(mm);
 
-	result = alloc_charge_folio(&folio, mm, cc);
+	result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
@@ -1899,7 +1904,7 @@ static enum scan_result collapse_file(st
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
-	result = alloc_charge_folio(&new_folio, mm, cc);
+	result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
 	if (result != SCAN_SUCCEED)
 		goto out;
_

Patches currently in -mm which might be from dev.jain@arm.com are

selftests-mm-simplify-byte-pattern-checking-in-mremap_test.patch