linux-mm.kvack.org archive mirror
* [PATCH rfc 00/18] mm: convert to use folio mm counter
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Convert mm counter page functions to folio ones.

  mm_counter()       ->	mm_counter_folio()
  mm_counter_file()  ->	mm_counter_file_folio()

Maybe it would be better to rename the folio mm counter functions back to
mm_counter() and mm_counter_file() once all the conversions are done?
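For illustration, a typical call-site conversion in this series looks
like the following sketch (not one of the actual hunks):

    /* before: mm_counter() resolves the page's folio internally */
    dec_mm_counter(mm, mm_counter(page));

    /* after: a caller that already holds a folio avoids that lookup */
    dec_mm_counter(mm, mm_counter_folio(folio));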

Kefeng Wang (18):
  mm: add mm_counter_folio() and mm_counter_file_folio()
  uprobes: use mm_counter_file_folio()
  mm: userfaultfd: use mm_counter_folio()
  mm: rmap: use mm_counter_[file]_folio()
  mm: swap: introduce pfn_swap_entry_to_folio()
  mm: huge_memory: use a folio in __split_huge_pmd_locked()
  mm: huge_memory: use a folio in zap_huge_pmd()
  mm: khugepaged: use mm_counter_file_folio() in
    collapse_pte_mapped_thp()
  mm: memory: use a folio in do_set_pmd()
  mm: memory: use mm_counter_file_folio() in copy_present_pte()
  mm: memory: use mm_counter_file_folio() in wp_page_copy()
  mm: memory: use mm_counter_file_folio() in set_pte_range()
  mm: memory: use a folio in insert_page_into_pte_locked()
  mm: remove mm_counter_file()
  mm: memory: use a folio in copy_nonpresent_pte()
  mm: use a folio in zap_pte_range()
  s390: pgtable: use mm_counter_folio() in ptep_zap_swap_entry()
  mm: remove mm_counter()

 arch/s390/mm/pgtable.c  |  4 +--
 include/linux/mm.h      | 12 +++----
 include/linux/swapops.h | 13 +++++++
 kernel/events/uprobes.c |  2 +-
 mm/huge_memory.c        | 25 +++++++------
 mm/khugepaged.c         |  4 +--
 mm/memory.c             | 77 +++++++++++++++++++++++------------------
 mm/rmap.c               | 10 +++---
 mm/userfaultfd.c        |  4 +--
 9 files changed, 88 insertions(+), 63 deletions(-)

-- 
2.27.0




* [PATCH 01/18] mm: add mm_counter_folio() and mm_counter_file_folio()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Introduce two folio-based mm counter functions, mm_counter_folio() and
mm_counter_file_folio(). They will be used in the following folio
conversions, and each saves a compound_head() call compared to
mm_counter() and mm_counter_file().
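As a sketch of where the saving comes from: the page-based helpers test
flags via PageAnon()/PageSwapBacked(), which look up the page's folio
through compound_head() internally, while the folio variants test the
flag directly:

    /* page-based: PageAnon() does folio_test_anon(page_folio(page)) */
    rss[mm_counter(page)]++;

    /* folio-based: no extra lookup when a folio is already in hand */
    rss[mm_counter_folio(folio)]++;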

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index caf13e94260e..f5f76504b212 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2583,19 +2583,29 @@ static inline void dec_mm_counter(struct mm_struct *mm, int member)
 	mm_trace_rss_stat(mm, member);
 }
 
-/* Optimized variant when page is already known not to be PageAnon */
-static inline int mm_counter_file(struct page *page)
+static inline int mm_counter_file_folio(struct folio *folio)
 {
-	if (PageSwapBacked(page))
+	if (folio_test_swapbacked(folio))
 		return MM_SHMEMPAGES;
 	return MM_FILEPAGES;
 }
 
-static inline int mm_counter(struct page *page)
+/* Optimized variant when page is already known not to be PageAnon */
+static inline int mm_counter_file(struct page *page)
+{
+	return mm_counter_file_folio(page_folio(page));
+}
+
+static inline int mm_counter_folio(struct folio *folio)
 {
-	if (PageAnon(page))
+	if (folio_test_anon(folio))
 		return MM_ANONPAGES;
-	return mm_counter_file(page);
+	return mm_counter_file_folio(folio);
+}
+
+static inline int mm_counter(struct page *page)
+{
+	return mm_counter_folio(page_folio(page));
 }
 
 static inline unsigned long get_mm_rss(struct mm_struct *mm)
-- 
2.27.0




* [PATCH 02/18] uprobes: use mm_counter_file_folio()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_file_folio() to save one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 kernel/events/uprobes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 435aac1d8c27..e2d3c89cc524 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -188,7 +188,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 		dec_mm_counter(mm, MM_ANONPAGES);
 
 	if (!folio_test_anon(old_folio)) {
-		dec_mm_counter(mm, mm_counter_file(old_page));
+		dec_mm_counter(mm, mm_counter_file_folio(old_folio));
 		inc_mm_counter(mm, MM_ANONPAGES);
 	}
 
-- 
2.27.0




* [PATCH 03/18] mm: userfaultfd: use mm_counter_folio()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_folio() to save one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/userfaultfd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 96d9eae5c7cc..e47aa6c91ef8 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -121,10 +121,10 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 	}
 
 	/*
-	 * Must happen after rmap, as mm_counter() checks mapping (via
+	 * Must happen after rmap, as mm_counter_folio() checks mapping (via
 	 * PageAnon()), which is set by __page_set_anon_rmap().
 	 */
-	inc_mm_counter(dst_mm, mm_counter(page));
+	inc_mm_counter(dst_mm, mm_counter_folio(folio));
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 
-- 
2.27.0




* [PATCH 04/18] mm: rmap: use mm_counter_[file]_folio()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_folio() and mm_counter_file_folio() to save five
compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/rmap.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7a27a2b41802..9d77975eaa35 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1678,7 +1678,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter_folio(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -1693,7 +1693,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter_folio(folio));
 		} else if (folio_test_anon(folio)) {
 			swp_entry_t entry = page_swap_entry(subpage);
 			pte_t swp_pte;
@@ -1801,7 +1801,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 *
 			 * See Documentation/mm/mmu_notifier.rst
 			 */
-			dec_mm_counter(mm, mm_counter_file(&folio->page));
+			dec_mm_counter(mm, mm_counter_file_folio(folio));
 		}
 discard:
 		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
@@ -2075,7 +2075,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				set_huge_pte_at(mm, address, pvmw.pte, pteval,
 						hsz);
 			} else {
-				dec_mm_counter(mm, mm_counter(&folio->page));
+				dec_mm_counter(mm, mm_counter_folio(folio));
 				set_pte_at(mm, address, pvmw.pte, pteval);
 			}
 
@@ -2090,7 +2090,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 * migration) will not expect userfaults on already
 			 * copied pages.
 			 */
-			dec_mm_counter(mm, mm_counter(&folio->page));
+			dec_mm_counter(mm, mm_counter_folio(folio));
 		} else {
 			swp_entry_t entry;
 			pte_t swp_pte;
-- 
2.27.0




* [PATCH 05/18] mm: swap: introduce pfn_swap_entry_to_folio()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Introduce a new pfn_swap_entry_to_folio(). It is similar to
pfn_swap_entry_to_page() but returns a folio, which allows us to
completely replace struct page variables with struct folio variables.
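The later patches in the series then convert callers along these lines
(an illustrative sketch, not an actual hunk):

    -	struct page *page = pfn_swap_entry_to_page(entry);
    -	dec_mm_counter(mm, mm_counter(page));
    +	struct folio *folio = pfn_swap_entry_to_folio(entry);
    +	dec_mm_counter(mm, mm_counter_folio(folio));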

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/swapops.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index bff1e8d97de0..85cb84e4be95 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -468,6 +468,19 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
 	return p;
 }
 
+static inline struct folio *pfn_swap_entry_to_folio(swp_entry_t entry)
+{
+	struct folio *folio = pfn_folio(swp_offset_pfn(entry));
+
+	/*
+	 * Any use of migration entries may only occur while the
+	 * corresponding folio is locked
+	 */
+	BUG_ON(is_migration_entry(entry) && !folio_test_locked(folio));
+
+	return folio;
+}
+
 /*
  * A pfn swap entry is a special type of swap entry that always has a pfn stored
  * in the swap offset. They are used to represent unaddressable device memory
-- 
2.27.0




* [PATCH 06/18] mm: huge_memory: use a folio in __split_huge_pmd_locked()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use a folio in __split_huge_pmd_locked(), which replaces six
compound_head() calls with two page_folio() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 34001ef9d029..054336ecab0a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2117,6 +2117,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	count_vm_event(THP_SPLIT_PMD);
 
 	if (!vma_is_anonymous(vma)) {
+		struct folio *folio;
 		old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);
 		/*
 		 * We are going to unmap this huge page. So
@@ -2130,17 +2131,17 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			swp_entry_t entry;
 
 			entry = pmd_to_swp_entry(old_pmd);
-			page = pfn_swap_entry_to_page(entry);
+			folio = pfn_swap_entry_to_folio(entry);
 		} else {
-			page = pmd_page(old_pmd);
-			if (!PageDirty(page) && pmd_dirty(old_pmd))
-				set_page_dirty(page);
-			if (!PageReferenced(page) && pmd_young(old_pmd))
-				SetPageReferenced(page);
-			page_remove_rmap(page, vma, true);
-			put_page(page);
+			folio = page_folio(pmd_page(old_pmd));
+			if (!folio_test_dirty(folio) && pmd_dirty(old_pmd))
+				folio_set_dirty(folio);
+			if (!folio_test_referenced(folio) && pmd_young(old_pmd))
+				folio_set_referenced(folio);
+			page_remove_rmap(&folio->page, vma, true);
+			folio_put(folio);
 		}
-		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
+		add_mm_counter(mm, mm_counter_file_folio(folio), -HPAGE_PMD_NR);
 		return;
 	}
 
-- 
2.27.0




* [PATCH 07/18] mm: huge_memory: use a folio in zap_huge_pmd()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use a folio in zap_huge_pmd(), which replaces two compound_head()
calls with one page_folio() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 054336ecab0a..2dba4d3aa2d3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1717,6 +1717,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		spin_unlock(ptl);
 	} else {
 		struct page *page = NULL;
+		struct folio *folio = NULL;
 		int flush_needed = 1;
 
 		if (pmd_present(orig_pmd)) {
@@ -1734,13 +1735,14 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		} else
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
 
-		if (PageAnon(page)) {
+		folio = page_folio(page);
+		if (folio_test_anon(folio)) {
 			zap_deposited_table(tlb->mm, pmd);
 			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 		} else {
 			if (arch_needs_pgtable_deposit())
 				zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, mm_counter_file(page), -HPAGE_PMD_NR);
+			add_mm_counter(tlb->mm, mm_counter_file_folio(folio), -HPAGE_PMD_NR);
 		}
 
 		spin_unlock(ptl);
-- 
2.27.0




* [PATCH 08/18] mm: khugepaged: use mm_counter_file_folio() in collapse_pte_mapped_thp()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_file_folio() to save two compound_head() calls in
collapse_pte_mapped_thp().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/khugepaged.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 064654717843..a6805f4f6dea 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1630,7 +1630,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	/* step 3: set proper refcount and mm_counters. */
 	if (nr_ptes) {
 		folio_ref_sub(folio, nr_ptes);
-		add_mm_counter(mm, mm_counter_file(&folio->page), -nr_ptes);
+		add_mm_counter(mm, mm_counter_file_folio(folio), -nr_ptes);
 	}
 
 	/* step 4: remove empty page table */
@@ -1661,7 +1661,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	if (nr_ptes) {
 		flush_tlb_mm(mm);
 		folio_ref_sub(folio, nr_ptes);
-		add_mm_counter(mm, mm_counter_file(&folio->page), -nr_ptes);
+		add_mm_counter(mm, mm_counter_file_folio(folio), -nr_ptes);
 	}
 	if (start_pte)
 		pte_unmap_unlock(start_pte, ptl);
-- 
2.27.0




* [PATCH 09/18] mm: memory: use a folio in do_set_pmd()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use a folio in do_set_pmd(), which saves one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 1f18ed4a5497..09009094a5f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4313,12 +4313,13 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t entry;
 	vm_fault_t ret = VM_FAULT_FALLBACK;
+	struct folio *folio;
 
 	if (!transhuge_vma_suitable(vma, haddr))
 		return ret;
 
-	page = compound_head(page);
-	if (compound_order(page) != HPAGE_PMD_ORDER)
+	folio = page_folio(page);
+	if (folio_order(folio) != HPAGE_PMD_ORDER)
 		return ret;
 
 	/*
@@ -4350,7 +4351,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	if (write)
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
-	add_mm_counter(vma->vm_mm, mm_counter_file(page), HPAGE_PMD_NR);
+	add_mm_counter(vma->vm_mm, mm_counter_file_folio(folio), HPAGE_PMD_NR);
 	page_add_file_rmap(page, vma, true);
 
 	/*
-- 
2.27.0




* [PATCH 10/18] mm: memory: use mm_counter_file_folio() in copy_present_pte()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_file_folio() to save one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 09009094a5f2..d35ca499bf1c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -960,7 +960,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	} else if (page) {
 		folio_get(folio);
 		page_dup_file_rmap(page, false);
-		rss[mm_counter_file(page)]++;
+		rss[mm_counter_file_folio(folio)]++;
 	}
 
 	/*
-- 
2.27.0




* [PATCH 11/18] mm: memory: use mm_counter_file_folio() in wp_page_copy()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_file_folio() to save one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index d35ca499bf1c..661c649afc22 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3158,7 +3158,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
 		if (old_folio) {
 			if (!folio_test_anon(old_folio)) {
-				dec_mm_counter(mm, mm_counter_file(&old_folio->page));
+				dec_mm_counter(mm, mm_counter_file_folio(old_folio));
 				inc_mm_counter(mm, MM_ANONPAGES);
 			}
 		} else {
-- 
2.27.0




* [PATCH 12/18] mm: memory: use mm_counter_file_folio() in set_pte_range()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_file_folio() to save one compound_head() call in
set_pte_range().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 661c649afc22..2d90da70a1c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4414,7 +4414,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		folio_add_new_anon_rmap(folio, vma, addr);
 		folio_add_lru_vma(folio, vma);
 	} else {
-		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		add_mm_counter(vma->vm_mm, mm_counter_file_folio(folio), nr);
 		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
 	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
-- 
2.27.0




* [PATCH 13/18] mm: memory: use a folio in insert_page_into_pte_locked()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use a folio in insert_page_into_pte_locked(), which saves one
compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 2d90da70a1c8..584fe9a550b9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1845,11 +1845,14 @@ static int validate_page_before_insert(struct page *page)
 static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
 			unsigned long addr, struct page *page, pgprot_t prot)
 {
+	struct folio *folio;
+
 	if (!pte_none(ptep_get(pte)))
 		return -EBUSY;
+	folio = page_folio(page);
 	/* Ok, finally just insert the thing.. */
-	get_page(page);
-	inc_mm_counter(vma->vm_mm, mm_counter_file(page));
+	folio_get(folio);
+	inc_mm_counter(vma->vm_mm, mm_counter_file_folio(folio));
 	page_add_file_rmap(page, vma, false);
 	set_pte_at(vma->vm_mm, addr, pte, mk_pte(page, prot));
 	return 0;
-- 
2.27.0




* [PATCH 14/18] mm: remove mm_counter_file()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Since no one calls mm_counter_file() anymore, remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5f76504b212..9353c5709c45 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2583,6 +2583,7 @@ static inline void dec_mm_counter(struct mm_struct *mm, int member)
 	mm_trace_rss_stat(mm, member);
 }
 
+/* Optimized variant when folio is already known not to be anon */
 static inline int mm_counter_file_folio(struct folio *folio)
 {
 	if (folio_test_swapbacked(folio))
@@ -2590,12 +2591,6 @@ static inline int mm_counter_file_folio(struct folio *folio)
 	return MM_FILEPAGES;
 }
 
-/* Optimized variant when page is already known not to be PageAnon */
-static inline int mm_counter_file(struct page *page)
-{
-	return mm_counter_file_folio(page_folio(page));
-}
-
 static inline int mm_counter_folio(struct folio *folio)
 {
 	if (folio_test_anon(folio))
-- 
2.27.0




* [PATCH 15/18] mm: memory: use a folio in copy_nonpresent_pte()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use a folio in copy_nonpresent_pte() to save one compound_head() call.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 584fe9a550b9..fcc04dce8e8a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -779,7 +779,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	unsigned long vm_flags = dst_vma->vm_flags;
 	pte_t orig_pte = ptep_get(src_pte);
 	pte_t pte = orig_pte;
-	struct page *page;
+	struct folio *folio;
 	swp_entry_t entry = pte_to_swp_entry(orig_pte);
 
 	if (likely(!non_swap_entry(entry))) {
@@ -801,9 +801,9 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		}
 		rss[MM_SWAPENTS]++;
 	} else if (is_migration_entry(entry)) {
-		page = pfn_swap_entry_to_page(entry);
+		folio = pfn_swap_entry_to_folio(entry);
 
-		rss[mm_counter(page)]++;
+		rss[mm_counter_folio(folio)]++;
 
 		if (!is_readable_migration_entry(entry) &&
 				is_cow_mapping(vm_flags)) {
@@ -822,7 +822,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			set_pte_at(src_mm, addr, src_pte, pte);
 		}
 	} else if (is_device_private_entry(entry)) {
-		page = pfn_swap_entry_to_page(entry);
+		folio = pfn_swap_entry_to_folio(entry);
 
 		/*
 		 * Update rss count even for unaddressable pages, as
@@ -833,10 +833,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 * for unaddressable pages, at some point. But for now
 		 * keep things as they are.
 		 */
-		get_page(page);
-		rss[mm_counter(page)]++;
+		folio_get(folio);
+		rss[mm_counter_folio(folio)]++;
 		/* Cannot fail as these pages cannot get pinned. */
-		BUG_ON(page_try_dup_anon_rmap(page, false, src_vma));
+		BUG_ON(page_try_dup_anon_rmap(&folio->page, false, src_vma));
 
 		/*
 		 * We do not preserve soft-dirty information, because so
-- 
2.27.0




* [PATCH 16/18] mm: use a folio in zap_pte_range()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Make should_zap_page() take a folio and use a folio in
zap_pte_range(), which saves several compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcc04dce8e8a..9b4334de9bf0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1358,19 +1358,19 @@ static inline bool should_zap_cows(struct zap_details *details)
 	return details->even_cows;
 }
 
-/* Decides whether we should zap this page with the page pointer specified */
-static inline bool should_zap_page(struct zap_details *details, struct page *page)
+/* Decides whether we should zap this folio with the folio pointer specified */
+static inline bool should_zap_page(struct zap_details *details, struct folio *folio)
 {
-	/* If we can make a decision without *page.. */
+	/* If we can make a decision without *folio.. */
 	if (should_zap_cows(details))
 		return true;
 
-	/* E.g. the caller passes NULL for the case of a zero page */
-	if (!page)
+	/* E.g. the caller passes NULL for the case of a zero folio */
+	if (!folio)
 		return true;
 
-	/* Otherwise we should only zap non-anon pages */
-	return !PageAnon(page);
+	/* Otherwise we should only zap non-anon folios */
+	return !folio_test_anon(folio);
 }
 
 static inline bool zap_drop_file_uffd_wp(struct zap_details *details)
@@ -1423,6 +1423,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = ptep_get(pte);
+		struct folio *folio = NULL;
 		struct page *page;
 
 		if (pte_none(ptent))
@@ -1435,7 +1436,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			unsigned int delay_rmap;
 
 			page = vm_normal_page(vma, addr, ptent);
-			if (unlikely(!should_zap_page(details, page)))
+			if (page)
+				folio = page_folio(page);
+
+			if (unlikely(!should_zap_page(details, folio)))
 				continue;
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
@@ -1449,18 +1453,18 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			}
 
 			delay_rmap = 0;
-			if (!PageAnon(page)) {
+			if (!folio_test_anon(folio)) {
 				if (pte_dirty(ptent)) {
-					set_page_dirty(page);
+					folio_set_dirty(folio);
 					if (tlb_delay_rmap(tlb)) {
 						delay_rmap = 1;
 						force_flush = 1;
 					}
 				}
 				if (pte_young(ptent) && likely(vma_has_recency(vma)))
-					mark_page_accessed(page);
+					folio_mark_accessed(folio);
 			}
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 			if (!delay_rmap) {
 				page_remove_rmap(page, vma, false);
 				if (unlikely(page_mapcount(page) < 0))
@@ -1477,9 +1481,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry) ||
 		    is_device_exclusive_entry(entry)) {
-			page = pfn_swap_entry_to_page(entry);
-			if (unlikely(!should_zap_page(details, page)))
+			folio = pfn_swap_entry_to_folio(entry);
+			if (unlikely(!should_zap_page(details, folio)))
 				continue;
+
 			/*
 			 * Both device private/exclusive mappings should only
 			 * work with anonymous page so far, so we don't need to
@@ -1487,10 +1492,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			 * see zap_install_uffd_wp_if_needed().
 			 */
 			WARN_ON_ONCE(!vma_is_anonymous(vma));
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 			if (is_device_private_entry(entry))
 				page_remove_rmap(page, vma, false);
-			put_page(page);
+			folio_put(folio);
 		} else if (!non_swap_entry(entry)) {
 			/* Genuine swap entry, hence a private anon page */
 			if (!should_zap_cows(details))
@@ -1499,10 +1504,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (unlikely(!free_swap_and_cache(entry)))
 				print_bad_pte(vma, addr, ptent, NULL);
 		} else if (is_migration_entry(entry)) {
-			page = pfn_swap_entry_to_page(entry);
-			if (!should_zap_page(details, page))
+			folio = pfn_swap_entry_to_folio(entry);
+			if (!should_zap_page(details, folio))
 				continue;
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 		} else if (pte_marker_entry_uffd_wp(entry)) {
 			/*
 			 * For anon: always drop the marker; for file: only
-- 
2.27.0




* [PATCH 17/18] s390: pgtable: use mm_counter_folio() in ptep_zap_swap_entry()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Use mm_counter_folio() in ptep_zap_swap_entry(), which helps to
clean up mm_counter().

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/s390/mm/pgtable.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 3bd2ab2a9a34..f4a53f5b0bcb 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -730,9 +730,9 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 	if (!non_swap_entry(entry))
 		dec_mm_counter(mm, MM_SWAPENTS);
 	else if (is_migration_entry(entry)) {
-		struct page *page = pfn_swap_entry_to_page(entry);
+		struct folio *folio = pfn_swap_entry_to_folio(entry);
 
-		dec_mm_counter(mm, mm_counter(page));
+		dec_mm_counter(mm, mm_counter_folio(folio));
 	}
 	free_swap_and_cache(entry);
 }
-- 
2.27.0




* [PATCH 18/18] mm: remove mm_counter()
From: Kefeng Wang @ 2023-11-03 14:01 UTC
  To: Andrew Morton
  Cc: linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
	linux-s390, Kefeng Wang

Since no one calls mm_counter() anymore, remove it.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mm.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9353c5709c45..fd1a27bbdb53 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2598,11 +2598,6 @@ static inline int mm_counter_folio(struct folio *folio)
 	return mm_counter_file_folio(folio);
 }
 
-static inline int mm_counter(struct page *page)
-{
-	return mm_counter_folio(page_folio(page));
-}
-
 static inline unsigned long get_mm_rss(struct mm_struct *mm)
 {
 	return get_mm_counter(mm, MM_FILEPAGES) +
-- 
2.27.0




* Re: [PATCH rfc 00/18] mm: convert to use folio mm counter
From: Matthew Wilcox @ 2023-11-03 14:30 UTC
  To: Kefeng Wang
  Cc: Andrew Morton, linux-kernel, linux-mm, David Hildenbrand,
	linux-s390

On Fri, Nov 03, 2023 at 10:01:01PM +0800, Kefeng Wang wrote:
> Convert mm counter page functions to folio ones.
> 
>   mm_counter()       ->	mm_counter_folio()
>   mm_counter_file()  ->	mm_counter_file_folio()
> 
> Maybe it would be better to rename the folio mm counter functions back to
> mm_counter() and mm_counter_file() once all the conversions are done?

I deliberately didn't do this because it's mostly churn.
Once all callers of mm_counter() and mm_counter_file() have been
converted to use folios, we can do one big patch to convert all
callers to pass a folio instead of a page.
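(A sketch of that possible end state, assuming the big rename lands as
described; this is illustrative only, not code from this series:

    static inline int mm_counter(struct folio *folio)
    {
            if (folio_test_anon(folio))
                    return MM_ANONPAGES;
            return mm_counter_file(folio);
    }

with mm_counter_file() likewise taking a struct folio.)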



* Re: [PATCH 16/18] mm: use a folio in zap_pte_range()
From: kernel test robot @ 2023-11-03 19:04 UTC
  To: Kefeng Wang, Andrew Morton
  Cc: llvm, oe-kbuild-all, Linux Memory Management List, linux-kernel,
	Matthew Wilcox, David Hildenbrand, linux-s390, Kefeng Wang

Hi Kefeng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-add-mm_counter_folio-and-mm_counter_file_folio/20231103-221846
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20231103140119.2306578-17-wangkefeng.wang%40huawei.com
patch subject: [PATCH 16/18] mm: use a folio in zap_pte_range()
config: um-allnoconfig (https://download.01.org/0day-ci/archive/20231104/202311040217.GgQqqwfS-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231104/202311040217.GgQqqwfS-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202311040217.GgQqqwfS-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from mm/memory.c:43:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:11:
   In file included from arch/um/include/asm/hardirq.h:5:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/um/include/asm/io.h:24:
   include/asm-generic/io.h:547:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     547 |         val = __raw_readb(PCI_IOBASE + addr);
         |                           ~~~~~~~~~~ ^
   include/asm-generic/io.h:560:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     560 |         val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:37:51: note: expanded from macro '__le16_to_cpu'
      37 | #define __le16_to_cpu(x) ((__force __u16)(__le16)(x))
         |                                                   ^
   In file included from mm/memory.c:43:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:11:
   In file included from arch/um/include/asm/hardirq.h:5:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/um/include/asm/io.h:24:
   include/asm-generic/io.h:573:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     573 |         val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
         |                                                         ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/little_endian.h:35:51: note: expanded from macro '__le32_to_cpu'
      35 | #define __le32_to_cpu(x) ((__force __u32)(__le32)(x))
         |                                                   ^
   In file included from mm/memory.c:43:
   In file included from include/linux/kernel_stat.h:9:
   In file included from include/linux/interrupt.h:11:
   In file included from include/linux/hardirq.h:11:
   In file included from arch/um/include/asm/hardirq.h:5:
   In file included from include/asm-generic/hardirq.h:17:
   In file included from include/linux/irq.h:20:
   In file included from include/linux/io.h:13:
   In file included from arch/um/include/asm/io.h:24:
   include/asm-generic/io.h:584:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     584 |         __raw_writeb(value, PCI_IOBASE + addr);
         |                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:594:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     594 |         __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   include/asm-generic/io.h:604:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     604 |         __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
         |                                                       ~~~~~~~~~~ ^
   include/asm-generic/io.h:692:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     692 |         readsb(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:700:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     700 |         readsw(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:708:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     708 |         readsl(PCI_IOBASE + addr, buffer, count);
         |                ~~~~~~~~~~ ^
   include/asm-generic/io.h:717:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     717 |         writesb(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
   include/asm-generic/io.h:726:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     726 |         writesw(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
   include/asm-generic/io.h:735:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
     735 |         writesl(PCI_IOBASE + addr, buffer, count);
         |                 ~~~~~~~~~~ ^
>> mm/memory.c:1497:22: warning: variable 'page' is uninitialized when used here [-Wuninitialized]
    1497 |                                 page_remove_rmap(page, vma, false);
         |                                                  ^~~~
   mm/memory.c:1427:20: note: initialize the variable 'page' to silence this warning
    1427 |                 struct page *page;
         |                                  ^
         |                                   = NULL
   13 warnings generated.


vim +/page +1497 mm/memory.c

999dad824c39ed Peter Xu              2022-05-12  1402  
51c6f666fceb31 Robin Holt            2005-11-13  1403  static unsigned long zap_pte_range(struct mmu_gather *tlb,
b5810039a54e5b Nicholas Piggin       2005-10-29  1404  				struct vm_area_struct *vma, pmd_t *pmd,
^1da177e4c3f41 Linus Torvalds        2005-04-16  1405  				unsigned long addr, unsigned long end,
97a894136f2980 Peter Zijlstra        2011-05-24  1406  				struct zap_details *details)
^1da177e4c3f41 Linus Torvalds        2005-04-16  1407  {
b5810039a54e5b Nicholas Piggin       2005-10-29  1408  	struct mm_struct *mm = tlb->mm;
d16dfc550f5326 Peter Zijlstra        2011-05-24  1409  	int force_flush = 0;
d559db086ff5be KAMEZAWA Hiroyuki     2010-03-05  1410  	int rss[NR_MM_COUNTERS];
97a894136f2980 Peter Zijlstra        2011-05-24  1411  	spinlock_t *ptl;
5f1a19070b16c2 Steven Rostedt        2011-06-15  1412  	pte_t *start_pte;
97a894136f2980 Peter Zijlstra        2011-05-24  1413  	pte_t *pte;
8a5f14a2317706 Kirill A. Shutemov    2015-02-10  1414  	swp_entry_t entry;
d559db086ff5be KAMEZAWA Hiroyuki     2010-03-05  1415  
ed6a79352cad00 Peter Zijlstra        2018-08-31  1416  	tlb_change_page_size(tlb, PAGE_SIZE);
e303297e6c3a7b Peter Zijlstra        2011-05-24  1417  	init_rss_vec(rss);
3db82b9374ca92 Hugh Dickins          2023-06-08  1418  	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
3db82b9374ca92 Hugh Dickins          2023-06-08  1419  	if (!pte)
3db82b9374ca92 Hugh Dickins          2023-06-08  1420  		return addr;
3db82b9374ca92 Hugh Dickins          2023-06-08  1421  
3ea277194daaea Mel Gorman            2017-08-02  1422  	flush_tlb_batched_pending(mm);
6606c3e0da5360 Zachary Amsden        2006-09-30  1423  	arch_enter_lazy_mmu_mode();
^1da177e4c3f41 Linus Torvalds        2005-04-16  1424  	do {
c33c794828f212 Ryan Roberts          2023-06-12  1425  		pte_t ptent = ptep_get(pte);
bdec140a4ef8da Kefeng Wang           2023-11-03  1426  		struct folio *folio = NULL;
8018db8525947c Peter Xu              2022-03-22  1427  		struct page *page;
8018db8525947c Peter Xu              2022-03-22  1428  
166f61b9435a1b Tobin C Harding       2017-02-24  1429  		if (pte_none(ptent))
^1da177e4c3f41 Linus Torvalds        2005-04-16  1430  			continue;
51c6f666fceb31 Robin Holt            2005-11-13  1431  
7b167b681013f5 Minchan Kim           2019-09-24  1432  		if (need_resched())
7b167b681013f5 Minchan Kim           2019-09-24  1433  			break;
7b167b681013f5 Minchan Kim           2019-09-24  1434  
6f5e6b9e69bf04 Hugh Dickins          2006-03-16  1435  		if (pte_present(ptent)) {
5df397dec7c4c0 Linus Torvalds        2022-11-09  1436  			unsigned int delay_rmap;
5df397dec7c4c0 Linus Torvalds        2022-11-09  1437  
25b2995a35b609 Christoph Hellwig     2019-06-13  1438  			page = vm_normal_page(vma, addr, ptent);
bdec140a4ef8da Kefeng Wang           2023-11-03  1439  			if (page)
bdec140a4ef8da Kefeng Wang           2023-11-03  1440  				folio = page_folio(page);
bdec140a4ef8da Kefeng Wang           2023-11-03  1441  
bdec140a4ef8da Kefeng Wang           2023-11-03  1442  			if (unlikely(!should_zap_page(details, folio)))
^1da177e4c3f41 Linus Torvalds        2005-04-16  1443  				continue;
b5810039a54e5b Nicholas Piggin       2005-10-29  1444  			ptent = ptep_get_and_clear_full(mm, addr, pte,
a600388d284193 Zachary Amsden        2005-09-03  1445  							tlb->fullmm);
e5136e876581ba Rick Edgecombe        2023-06-12  1446  			arch_check_zapped_pte(vma, ptent);
^1da177e4c3f41 Linus Torvalds        2005-04-16  1447  			tlb_remove_tlb_entry(tlb, pte, addr);
999dad824c39ed Peter Xu              2022-05-12  1448  			zap_install_uffd_wp_if_needed(vma, addr, pte, details,
999dad824c39ed Peter Xu              2022-05-12  1449  						      ptent);
e2942062e01df8 xu xin                2023-06-13  1450  			if (unlikely(!page)) {
6080d19f07043a xu xin                2023-06-13  1451  				ksm_might_unmap_zero_page(mm, ptent);
^1da177e4c3f41 Linus Torvalds        2005-04-16  1452  				continue;
e2942062e01df8 xu xin                2023-06-13  1453  			}
eca56ff906bdd0 Jerome Marchand       2016-01-14  1454  
5df397dec7c4c0 Linus Torvalds        2022-11-09  1455  			delay_rmap = 0;
bdec140a4ef8da Kefeng Wang           2023-11-03  1456  			if (!folio_test_anon(folio)) {
1cf35d47712dd5 Linus Torvalds        2014-04-25  1457  				if (pte_dirty(ptent)) {
bdec140a4ef8da Kefeng Wang           2023-11-03  1458  					folio_set_dirty(folio);
5df397dec7c4c0 Linus Torvalds        2022-11-09  1459  					if (tlb_delay_rmap(tlb)) {
5df397dec7c4c0 Linus Torvalds        2022-11-09  1460  						delay_rmap = 1;
5df397dec7c4c0 Linus Torvalds        2022-11-09  1461  						force_flush = 1;
5df397dec7c4c0 Linus Torvalds        2022-11-09  1462  					}
1cf35d47712dd5 Linus Torvalds        2014-04-25  1463  				}
8788f678148676 Yu Zhao               2022-12-30  1464  				if (pte_young(ptent) && likely(vma_has_recency(vma)))
bdec140a4ef8da Kefeng Wang           2023-11-03  1465  					folio_mark_accessed(folio);
6237bcd94851e9 Hugh Dickins          2005-10-29  1466  			}
bdec140a4ef8da Kefeng Wang           2023-11-03  1467  			rss[mm_counter_folio(folio)]--;
5df397dec7c4c0 Linus Torvalds        2022-11-09  1468  			if (!delay_rmap) {
cea86fe246b694 Hugh Dickins          2022-02-14  1469  				page_remove_rmap(page, vma, false);
3dc147414ccad8 Hugh Dickins          2009-01-06  1470  				if (unlikely(page_mapcount(page) < 0))
3dc147414ccad8 Hugh Dickins          2009-01-06  1471  					print_bad_pte(vma, addr, ptent, page);
5df397dec7c4c0 Linus Torvalds        2022-11-09  1472  			}
5df397dec7c4c0 Linus Torvalds        2022-11-09  1473  			if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
1cf35d47712dd5 Linus Torvalds        2014-04-25  1474  				force_flush = 1;
ce9ec37bddb633 Will Deacon           2014-10-28  1475  				addr += PAGE_SIZE;
d16dfc550f5326 Peter Zijlstra        2011-05-24  1476  				break;
1cf35d47712dd5 Linus Torvalds        2014-04-25  1477  			}
^1da177e4c3f41 Linus Torvalds        2005-04-16  1478  			continue;
^1da177e4c3f41 Linus Torvalds        2005-04-16  1479  		}
5042db43cc26f5 Jérôme Glisse         2017-09-08  1480  
5042db43cc26f5 Jérôme Glisse         2017-09-08  1481  		entry = pte_to_swp_entry(ptent);
b756a3b5e7ead8 Alistair Popple       2021-06-30  1482  		if (is_device_private_entry(entry) ||
b756a3b5e7ead8 Alistair Popple       2021-06-30  1483  		    is_device_exclusive_entry(entry)) {
bdec140a4ef8da Kefeng Wang           2023-11-03  1484  			folio = pfn_swap_entry_to_folio(entry);
bdec140a4ef8da Kefeng Wang           2023-11-03  1485  			if (unlikely(!should_zap_page(details, folio)))
5042db43cc26f5 Jérôme Glisse         2017-09-08  1486  				continue;
bdec140a4ef8da Kefeng Wang           2023-11-03  1487  
999dad824c39ed Peter Xu              2022-05-12  1488  			/*
999dad824c39ed Peter Xu              2022-05-12  1489  			 * Both device private/exclusive mappings should only
999dad824c39ed Peter Xu              2022-05-12  1490  			 * work with anonymous page so far, so we don't need to
999dad824c39ed Peter Xu              2022-05-12  1491  			 * consider uffd-wp bit when zap. For more information,
999dad824c39ed Peter Xu              2022-05-12  1492  			 * see zap_install_uffd_wp_if_needed().
999dad824c39ed Peter Xu              2022-05-12  1493  			 */
999dad824c39ed Peter Xu              2022-05-12  1494  			WARN_ON_ONCE(!vma_is_anonymous(vma));
bdec140a4ef8da Kefeng Wang           2023-11-03  1495  			rss[mm_counter_folio(folio)]--;
b756a3b5e7ead8 Alistair Popple       2021-06-30  1496  			if (is_device_private_entry(entry))
cea86fe246b694 Hugh Dickins          2022-02-14 @1497  				page_remove_rmap(page, vma, false);
bdec140a4ef8da Kefeng Wang           2023-11-03  1498  			folio_put(folio);
8018db8525947c Peter Xu              2022-03-22  1499  		} else if (!non_swap_entry(entry)) {
5abfd71d936a8a Peter Xu              2022-03-22  1500  			/* Genuine swap entry, hence a private anon page */
5abfd71d936a8a Peter Xu              2022-03-22  1501  			if (!should_zap_cows(details))
^1da177e4c3f41 Linus Torvalds        2005-04-16  1502  				continue;
b084d4353ff99d KAMEZAWA Hiroyuki     2010-03-05  1503  			rss[MM_SWAPENTS]--;
8018db8525947c Peter Xu              2022-03-22  1504  			if (unlikely(!free_swap_and_cache(entry)))
8018db8525947c Peter Xu              2022-03-22  1505  				print_bad_pte(vma, addr, ptent, NULL);
5abfd71d936a8a Peter Xu              2022-03-22  1506  		} else if (is_migration_entry(entry)) {
bdec140a4ef8da Kefeng Wang           2023-11-03  1507  			folio = pfn_swap_entry_to_folio(entry);
bdec140a4ef8da Kefeng Wang           2023-11-03  1508  			if (!should_zap_page(details, folio))
5abfd71d936a8a Peter Xu              2022-03-22  1509  				continue;
bdec140a4ef8da Kefeng Wang           2023-11-03  1510  			rss[mm_counter_folio(folio)]--;
999dad824c39ed Peter Xu              2022-05-12  1511  		} else if (pte_marker_entry_uffd_wp(entry)) {
2bad466cc9d9b4 Peter Xu              2023-03-09  1512  			/*
2bad466cc9d9b4 Peter Xu              2023-03-09  1513  			 * For anon: always drop the marker; for file: only
2bad466cc9d9b4 Peter Xu              2023-03-09  1514  			 * drop the marker if explicitly requested.
2bad466cc9d9b4 Peter Xu              2023-03-09  1515  			 */
2bad466cc9d9b4 Peter Xu              2023-03-09  1516  			if (!vma_is_anonymous(vma) &&
2bad466cc9d9b4 Peter Xu              2023-03-09  1517  			    !zap_drop_file_uffd_wp(details))
999dad824c39ed Peter Xu              2022-05-12  1518  				continue;
9f186f9e5fa9eb Miaohe Lin            2022-05-19  1519  		} else if (is_hwpoison_entry(entry) ||
af19487f00f34f Axel Rasmussen        2023-07-07  1520  			   is_poisoned_swp_entry(entry)) {
5abfd71d936a8a Peter Xu              2022-03-22  1521  			if (!should_zap_cows(details))
5abfd71d936a8a Peter Xu              2022-03-22  1522  				continue;
5abfd71d936a8a Peter Xu              2022-03-22  1523  		} else {
5abfd71d936a8a Peter Xu              2022-03-22  1524  			/* We should have covered all the swap entry types */
5abfd71d936a8a Peter Xu              2022-03-22  1525  			WARN_ON_ONCE(1);
9f9f1acd713d69 Konstantin Khlebnikov 2012-01-20  1526  		}
9888a1cae3f859 Zachary Amsden        2006-09-30  1527  		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
999dad824c39ed Peter Xu              2022-05-12  1528  		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
97a894136f2980 Peter Zijlstra        2011-05-24  1529  	} while (pte++, addr += PAGE_SIZE, addr != end);
ae859762332f19 Hugh Dickins          2005-10-29  1530  
d559db086ff5be KAMEZAWA Hiroyuki     2010-03-05  1531  	add_mm_rss_vec(mm, rss);
6606c3e0da5360 Zachary Amsden        2006-09-30  1532  	arch_leave_lazy_mmu_mode();
51c6f666fceb31 Robin Holt            2005-11-13  1533  
1cf35d47712dd5 Linus Torvalds        2014-04-25  1534  	/* Do the actual TLB flush before dropping ptl */
5df397dec7c4c0 Linus Torvalds        2022-11-09  1535  	if (force_flush) {
1cf35d47712dd5 Linus Torvalds        2014-04-25  1536  		tlb_flush_mmu_tlbonly(tlb);
5df397dec7c4c0 Linus Torvalds        2022-11-09  1537  		tlb_flush_rmaps(tlb, vma);
5df397dec7c4c0 Linus Torvalds        2022-11-09  1538  	}
1cf35d47712dd5 Linus Torvalds        2014-04-25  1539  	pte_unmap_unlock(start_pte, ptl);
1cf35d47712dd5 Linus Torvalds        2014-04-25  1540  
1cf35d47712dd5 Linus Torvalds        2014-04-25  1541  	/*
1cf35d47712dd5 Linus Torvalds        2014-04-25  1542  	 * If we forced a TLB flush (either due to running out of
1cf35d47712dd5 Linus Torvalds        2014-04-25  1543  	 * batch buffers or because we needed to flush dirty TLB
1cf35d47712dd5 Linus Torvalds        2014-04-25  1544  	 * entries before releasing the ptl), free the batched
3db82b9374ca92 Hugh Dickins          2023-06-08  1545  	 * memory too. Come back again if we didn't do everything.
1cf35d47712dd5 Linus Torvalds        2014-04-25  1546  	 */
3db82b9374ca92 Hugh Dickins          2023-06-08  1547  	if (force_flush)
fa0aafb8acb684 Peter Zijlstra        2018-09-20  1548  		tlb_flush_mmu(tlb);
d16dfc550f5326 Peter Zijlstra        2011-05-24  1549  
51c6f666fceb31 Robin Holt            2005-11-13  1550  	return addr;
^1da177e4c3f41 Linus Torvalds        2005-04-16  1551  }
^1da177e4c3f41 Linus Torvalds        2005-04-16  1552  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH rfc 00/18] mm: convert to use folio mm counter
From: Kefeng Wang @ 2023-11-04  3:58 UTC
  To: Matthew Wilcox
  Cc: Andrew Morton, linux-kernel, linux-mm, David Hildenbrand,
	linux-s390



On 2023/11/3 22:30, Matthew Wilcox wrote:
> On Fri, Nov 03, 2023 at 10:01:01PM +0800, Kefeng Wang wrote:
>> Convert mm counter page functions to folio ones.
>>
>>    mm_counter()       ->	mm_counter_folio()
>>    mm_counter_file()  ->	mm_counter_file_folio()
>>
>> Maybe it would be better to rename the folio mm counter functions back to
>> mm_counter() and mm_counter_file() once all the conversions are done?
> 
> I deliberately didn't do this because it's mostly churn.
> Once all callers of mm_counter() and mm_counter_file() have been
> converted to use folios, we can do one big patch to convert all
> callers to pass a folio instead of a page.
> 
I re-ordered the patches as you suggested; please help check v2, thanks.


