* [merged mm-stable] mm-huge-avoid-big-else-branch-in-zap_huge_pmd.patch removed from -mm tree
@ 2026-03-31 0:43 Andrew Morton
From: Andrew Morton @ 2026-03-31 0:43 UTC (permalink / raw)
To: mm-commits, ziy, zhengqi.arch, surenb, ryan.roberts, rppt, npache,
mhocko, liam.howlett, lance.yang, dev.jain, david, baolin.wang,
baohua, ljs, akpm
The quilt patch titled
Subject: mm/huge: avoid big else branch in zap_huge_pmd()
has been removed from the -mm tree. Its filename was
mm-huge-avoid-big-else-branch-in-zap_huge_pmd.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
Subject: mm/huge: avoid big else branch in zap_huge_pmd()
Date: Fri, 20 Mar 2026 18:07:19 +0000
We don't need an extra level of indentation; we can simply exit early in
the first two branches.
No functional change intended.
Link: https://lkml.kernel.org/r/6b4d5efdbf5554b8fe788f677d0b50f355eec999.1774029655.git.ljs@kernel.org
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Cc: Barry Song <baohua@kernel.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/huge_memory.c | 93 +++++++++++++++++++++++----------------------
1 file changed, 48 insertions(+), 45 deletions(-)
--- a/mm/huge_memory.c~mm-huge-avoid-big-else-branch-in-zap_huge_pmd
+++ a/mm/huge_memory.c
@@ -2405,8 +2405,10 @@ static inline void zap_deposited_table(s
 int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
		 pmd_t *pmd, unsigned long addr)
 {
-	pmd_t orig_pmd;
+	struct folio *folio = NULL;
+	int flush_needed = 1;
 	spinlock_t *ptl;
+	pmd_t orig_pmd;
 
 	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
@@ -2427,59 +2429,60 @@ int zap_huge_pmd(struct mmu_gather *tlb,
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-	} else if (is_huge_zero_pmd(orig_pmd)) {
+		return 1;
+	}
+	if (is_huge_zero_pmd(orig_pmd)) {
 		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-	} else {
-		struct folio *folio = NULL;
-		int flush_needed = 1;
+		return 1;
+	}
+
+	if (pmd_present(orig_pmd)) {
+		struct page *page = pmd_page(orig_pmd);
 
-		if (pmd_present(orig_pmd)) {
-			struct page *page = pmd_page(orig_pmd);
+		folio = page_folio(page);
+		folio_remove_rmap_pmd(folio, page, vma);
+		WARN_ON_ONCE(folio_mapcount(folio) < 0);
+		VM_BUG_ON_PAGE(!PageHead(page), page);
+	} else if (pmd_is_valid_softleaf(orig_pmd)) {
+		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
-			folio = page_folio(page);
-			folio_remove_rmap_pmd(folio, page, vma);
-			WARN_ON_ONCE(folio_mapcount(folio) < 0);
-			VM_BUG_ON_PAGE(!PageHead(page), page);
-		} else if (pmd_is_valid_softleaf(orig_pmd)) {
-			const softleaf_t entry = softleaf_from_pmd(orig_pmd);
-
-			folio = softleaf_to_folio(entry);
-			flush_needed = 0;
-
-			if (!thp_migration_supported())
-				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
-		}
+		folio = softleaf_to_folio(entry);
+		flush_needed = 0;
+
+		if (!thp_migration_supported())
+			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+	}
 
-		if (folio_test_anon(folio)) {
+	if (folio_test_anon(folio)) {
+		zap_deposited_table(tlb->mm, pmd);
+		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	} else {
+		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
-		} else {
-			if (arch_needs_pgtable_deposit())
-				zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, mm_counter_file(folio),
-				       -HPAGE_PMD_NR);
-
-			/*
-			 * Use flush_needed to indicate whether the PMD entry
-			 * is present, instead of checking pmd_present() again.
-			 */
-			if (flush_needed && pmd_young(orig_pmd) &&
-			    likely(vma_has_recency(vma)))
-				folio_mark_accessed(folio);
-		}
-
-		if (folio_is_device_private(folio)) {
-			folio_remove_rmap_pmd(folio, &folio->page, vma);
-			WARN_ON_ONCE(folio_mapcount(folio) < 0);
-			folio_put(folio);
-		}
+		add_mm_counter(tlb->mm, mm_counter_file(folio),
+			       -HPAGE_PMD_NR);
 
-		spin_unlock(ptl);
-		if (flush_needed)
-			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
+		/*
+		 * Use flush_needed to indicate whether the PMD entry
+		 * is present, instead of checking pmd_present() again.
		 */
+		if (flush_needed && pmd_young(orig_pmd) &&
+		    likely(vma_has_recency(vma)))
+			folio_mark_accessed(folio);
 	}
+
+	if (folio_is_device_private(folio)) {
+		folio_remove_rmap_pmd(folio, &folio->page, vma);
+		WARN_ON_ONCE(folio_mapcount(folio) < 0);
+		folio_put(folio);
+	}
+
+	spin_unlock(ptl);
+	if (flush_needed)
+		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
+
 	return 1;
 }
_
Patches currently in -mm which might be from ljs@kernel.org are
maintainers-update-mglru-entry-to-reflect-current-status.patch
selftests-mm-add-merge-test-for-partial-msealed-range.patch