linux-mm.kvack.org archive mirror
* [PATCH 0/4] Remove leftover mlock/munlock page wrappers
@ 2023-01-16 19:28 Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 1/4] mm: Remove page_evictable() Matthew Wilcox (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-01-16 19:28 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

We no longer need the various mlock page functions, as all callers now
have folios.
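
For context, the wrappers being removed are thin shims that convert a
page to its folio and call the folio version; for example, the
mlock_vma_page() wrapper deleted from mm/internal.h by patch 2 looks
like this:

	static inline void mlock_vma_page(struct page *page,
				struct vm_area_struct *vma, bool compound)
	{
		mlock_vma_folio(page_folio(page), vma, compound);
	}

Since every remaining caller already holds a folio, the page_folio()
conversion is dead weight and the callers can use the folio functions
directly.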

Matthew Wilcox (Oracle) (4):
  mm: Remove page_evictable()
  mm: Remove mlock_vma_page()
  mm: Remove munlock_vma_page()
  mm: Clean up mlock_page / munlock_page references in comments

 Documentation/mm/unevictable-lru.rst | 129 ++++++++++++++-------------
 kernel/events/uprobes.c              |   1 -
 mm/internal.h                        |  29 +-----
 mm/memory-failure.c                  |   2 +-
 mm/mlock.c                           |   4 +-
 mm/rmap.c                            |  16 ++--
 mm/swap.c                            |   4 +-
 7 files changed, 81 insertions(+), 104 deletions(-)

-- 
2.35.1




* [PATCH 1/4] mm: Remove page_evictable()
  2023-01-16 19:28 [PATCH 0/4] Remove leftover mlock/munlock page wrappers Matthew Wilcox (Oracle)
@ 2023-01-16 19:28 ` Matthew Wilcox (Oracle)
  2023-01-19 10:35   ` Mike Rapoport
  2023-01-16 19:28 ` [PATCH 2/4] mm: Remove mlock_vma_page() Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-01-16 19:28 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

This function now has no users.  Also update the unevictable-lru
documentation to discuss folios instead of pages (mostly).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
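For reference, callers now use folio_evictable() instead; a minimal
sketch of it is below, assuming it simply mirrors the page_evictable()
being removed here (see the mm/internal.h hunk), just operating on a
folio:

	static inline bool folio_evictable(struct folio *folio)
	{
		bool ret;

		/* Prevent address_space of inode and swap cache from being freed */
		rcu_read_lock();
		ret = !mapping_unevictable(folio_mapping(folio)) &&
				!folio_test_mlocked(folio);
		rcu_read_unlock();
		return ret;
	}
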
 Documentation/mm/unevictable-lru.rst | 89 ++++++++++++++--------------
 mm/internal.h                        | 11 ----
 2 files changed, 46 insertions(+), 54 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 2a90d0721dd9..1972d37d97cf 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -12,7 +12,7 @@ Introduction
 
 This document describes the Linux memory manager's "Unevictable LRU"
 infrastructure and the use of this to manage several types of "unevictable"
-pages.
+folios.
 
 The document attempts to provide the overall rationale behind this mechanism
 and the rationale for some of the design decisions that drove the
@@ -27,8 +27,8 @@ The Unevictable LRU
 ===================
 
 The Unevictable LRU facility adds an additional LRU list to track unevictable
-pages and to hide these pages from vmscan.  This mechanism is based on a patch
-by Larry Woodman of Red Hat to address several scalability problems with page
+folios and to hide these folios from vmscan.  This mechanism is based on a patch
+by Larry Woodman of Red Hat to address several scalability problems with folio
 reclaim in Linux.  The problems have been observed at customer sites on large
 memory x86_64 systems.
 
@@ -52,40 +52,41 @@ The infrastructure may also be able to handle other conditions that make pages
 unevictable, either by definition or by circumstance, in the future.
 
 
-The Unevictable LRU Page List
------------------------------
+The Unevictable LRU Folio List
+------------------------------
 
-The Unevictable LRU page list is a lie.  It was never an LRU-ordered list, but a
-companion to the LRU-ordered anonymous and file, active and inactive page lists;
-and now it is not even a page list.  But following familiar convention, here in
-this document and in the source, we often imagine it as a fifth LRU page list.
+The Unevictable LRU folio list is a lie.  It was never an LRU-ordered
+list, but a companion to the LRU-ordered anonymous and file, active and
+inactive folio lists; and now it is not even a folio list.  But following
+familiar convention, here in this document and in the source, we often
+imagine it as a fifth LRU folio list.
 
 The Unevictable LRU infrastructure consists of an additional, per-node, LRU list
-called the "unevictable" list and an associated page flag, PG_unevictable, to
-indicate that the page is being managed on the unevictable list.
+called the "unevictable" list and an associated folio flag, PG_unevictable, to
+indicate that the folio is being managed on the unevictable list.
 
 The PG_unevictable flag is analogous to, and mutually exclusive with, the
-PG_active flag in that it indicates on which LRU list a page resides when
+PG_active flag in that it indicates on which LRU list a folio resides when
 PG_lru is set.
 
-The Unevictable LRU infrastructure maintains unevictable pages as if they were
+The Unevictable LRU infrastructure maintains unevictable folios as if they were
 on an additional LRU list for a few reasons:
 
- (1) We get to "treat unevictable pages just like we treat other pages in the
+ (1) We get to "treat unevictable folios just like we treat other folios in the
      system - which means we get to use the same code to manipulate them, the
      same code to isolate them (for migrate, etc.), the same code to keep track
      of the statistics, etc..." [Rik van Riel]
 
- (2) We want to be able to migrate unevictable pages between nodes for memory
+ (2) We want to be able to migrate unevictable folios between nodes for memory
      defragmentation, workload management and memory hotplug.  The Linux kernel
-     can only migrate pages that it can successfully isolate from the LRU
+     can only migrate folios that it can successfully isolate from the LRU
      lists (or "Movable" pages: outside of consideration here).  If we were to
-     maintain pages elsewhere than on an LRU-like list, where they can be
-     detected by isolate_lru_page(), we would prevent their migration.
+     maintain folios elsewhere than on an LRU-like list, where they can be
+     detected by folio_isolate_lru(), we would prevent their migration.
 
-The unevictable list does not differentiate between file-backed and anonymous,
-swap-backed pages.  This differentiation is only important while the pages are,
-in fact, evictable.
+The unevictable list does not differentiate between file-backed and
+anonymous, swap-backed folios.  This differentiation is only important
+while the folios are, in fact, evictable.
 
 The unevictable list benefits from the "arrayification" of the per-node LRU
 lists and statistics originally proposed and posted by Christoph Lameter.
@@ -158,7 +159,7 @@ These are currently used in three places in the kernel:
 Detecting Unevictable Pages
 ---------------------------
 
-The function page_evictable() in mm/internal.h determines whether a page is
+The function folio_evictable() in mm/internal.h determines whether a folio is
 evictable or not using the query function outlined above [see section
 :ref:`Marking address spaces unevictable <mark_addr_space_unevict>`]
 to check the AS_UNEVICTABLE flag.
@@ -167,7 +168,7 @@ For address spaces that are so marked after being populated (as SHM regions
 might be), the lock action (e.g. SHM_LOCK) can be lazy, and need not populate
 the page tables for the region as does, for example, mlock(), nor need it make
 any special effort to push any pages in the SHM_LOCK'd area to the unevictable
-list.  Instead, vmscan will do this if and when it encounters the pages during
+list.  Instead, vmscan will do this if and when it encounters the folios during
 a reclamation scan.
 
 On an unlock action (such as SHM_UNLOCK), the unlocker (e.g. shmctl()) must scan
@@ -176,41 +177,43 @@ condition is keeping them unevictable.  If an unevictable region is destroyed,
 the pages are also "rescued" from the unevictable list in the process of
 freeing them.
 
-page_evictable() also checks for mlocked pages by testing an additional page
-flag, PG_mlocked (as wrapped by PageMlocked()), which is set when a page is
-faulted into a VM_LOCKED VMA, or found in a VMA being VM_LOCKED.
+folio_evictable() also checks for mlocked folios by calling
+folio_test_mlocked(); the mlocked flag is set when a folio is faulted
+into a VM_LOCKED VMA, or found in a VMA being VM_LOCKED.
 
 
-Vmscan's Handling of Unevictable Pages
+Vmscan's Handling of Unevictable Folios
 --------------------------------------
 
-If unevictable pages are culled in the fault path, or moved to the unevictable
-list at mlock() or mmap() time, vmscan will not encounter the pages until they
+If unevictable folios are culled in the fault path, or moved to the unevictable
+list at mlock() or mmap() time, vmscan will not encounter the folios until they
 have become evictable again (via munlock() for example) and have been "rescued"
 from the unevictable list.  However, there may be situations where we decide,
-for the sake of expediency, to leave an unevictable page on one of the regular
+for the sake of expediency, to leave an unevictable folio on one of the regular
 active/inactive LRU lists for vmscan to deal with.  vmscan checks for such
-pages in all of the shrink_{active|inactive|page}_list() functions and will
-"cull" such pages that it encounters: that is, it diverts those pages to the
+folios in all of the shrink_{active|inactive|page}_list() functions and will
+"cull" such folios that it encounters: that is, it diverts those folios to the
 unevictable list for the memory cgroup and node being scanned.
 
-There may be situations where a page is mapped into a VM_LOCKED VMA, but the
-page is not marked as PG_mlocked.  Such pages will make it all the way to
-shrink_active_list() or shrink_page_list() where they will be detected when
-vmscan walks the reverse map in folio_referenced() or try_to_unmap().  The page
-is culled to the unevictable list when it is released by the shrinker.
+There may be situations where a folio is mapped into a VM_LOCKED VMA,
+but the folio does not have the mlocked flag set.  Such folios will make
+it all the way to shrink_active_list() or shrink_page_list() where they
+will be detected when vmscan walks the reverse map in folio_referenced()
+or try_to_unmap().  The folio is culled to the unevictable list when it
+is released by the shrinker.
 
-To "cull" an unevictable page, vmscan simply puts the page back on the LRU list
-using putback_lru_page() - the inverse operation to isolate_lru_page() - after
-dropping the page lock.  Because the condition which makes the page unevictable
-may change once the page is unlocked, __pagevec_lru_add_fn() will recheck the
-unevictable state of a page before placing it on the unevictable list.
+To "cull" an unevictable folio, vmscan simply puts the folio back on
+the LRU list using folio_putback_lru() - the inverse operation to
+folio_isolate_lru() - after dropping the folio lock.  Because the
+condition which makes the folio unevictable may change once the folio
+is unlocked, __pagevec_lru_add_fn() will recheck the unevictable state
+of a folio before placing it on the unevictable list.
 
 
 MLOCKED Pages
 =============
 
-The unevictable page list is also useful for mlock(), in addition to ramfs and
+The unevictable folio list is also useful for mlock(), in addition to ramfs and
 SYSV SHM.  Note that mlock() is only available in CONFIG_MMU=y situations; in
 NOMMU situations, all mappings are effectively mlocked.
 
diff --git a/mm/internal.h b/mm/internal.h
index 2d09a7a0600a..74bc1fe45711 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -159,17 +159,6 @@ static inline bool folio_evictable(struct folio *folio)
 	return ret;
 }
 
-static inline bool page_evictable(struct page *page)
-{
-	bool ret;
-
-	/* Prevent address_space of inode and swap cache from being freed */
-	rcu_read_lock();
-	ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
-	rcu_read_unlock();
-	return ret;
-}
-
 /*
  * Turn a non-refcounted page (->_refcount == 0) into refcounted with
  * a count of one.
-- 
2.35.1




* [PATCH 2/4] mm: Remove mlock_vma_page()
  2023-01-16 19:28 [PATCH 0/4] Remove leftover mlock/munlock page wrappers Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 1/4] mm: Remove page_evictable() Matthew Wilcox (Oracle)
@ 2023-01-16 19:28 ` Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 3/4] mm: Remove munlock_vma_page() Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 4/4] mm: Clean up mlock_page / munlock_page references in comments Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-01-16 19:28 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

All callers now have a folio and can call mlock_vma_folio().  Update
the documentation to refer to mlock_vma_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
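As a minimal illustration of the converted call sites in the mm/rmap.c
hunks below (folio, vma and compound are the names used in the diff;
the page_folio() lookup is the usual way a caller that still starts
from a struct page obtains the folio):

	struct folio *folio = page_folio(page);

	/* mlock the folio if the VMA is VM_LOCKED */
	mlock_vma_folio(folio, vma, compound);
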
 Documentation/mm/unevictable-lru.rst |  6 +++---
 mm/internal.h                        | 10 +---------
 mm/mlock.c                           |  4 ++--
 mm/rmap.c                            |  4 ++--
 4 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 1972d37d97cf..45aadfefb810 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -311,7 +311,7 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
+calls mlock_vma_folio(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).  Or when
 it is a newly allocated anonymous page, folio_add_lru_vma() calls
 mlock_new_folio() instead: similar to mlock_folio(), but can make better
@@ -413,7 +413,7 @@ However, since mlock_vma_pages_range() starts by setting VM_LOCKED on a VMA,
 before mlocking any pages already present, if one of those pages were migrated
 before mlock_pte_range() reached it, it would get counted twice in mlock_count.
 To prevent that, mlock_vma_pages_range() temporarily marks the VMA as VM_IO,
-so that mlock_vma_page() will skip it.
+so that mlock_vma_folio() will skip it.
 
 To complete page migration, we place the old and new pages back onto the LRU
 afterwards.  The "unneeded" page - old page on success, new page on failure -
@@ -552,6 +552,6 @@ and node unevictable list.
 
 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
 shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
-check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
+check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them.  Such pages are culled to the unevictable list when released
 by the shrinker.
diff --git a/mm/internal.h b/mm/internal.h
index 74bc1fe45711..0b74105ea363 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -518,7 +518,7 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
 extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
 			      unsigned long len);
 /*
- * mlock_vma_page() and munlock_vma_page():
+ * mlock_vma_folio() and munlock_vma_folio():
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
@@ -547,12 +547,6 @@ static inline void mlock_vma_folio(struct folio *folio,
 		mlock_folio(folio);
 }
 
-static inline void mlock_vma_page(struct page *page,
-			struct vm_area_struct *vma, bool compound)
-{
-	mlock_vma_folio(page_folio(page), vma, compound);
-}
-
 void munlock_folio(struct folio *folio);
 
 static inline void munlock_vma_folio(struct folio *folio,
@@ -656,8 +650,6 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 }
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void mlock_vma_page(struct page *page,
-			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void mlock_new_folio(struct folio *folio) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index 9e9c8be58277..b680f11879c3 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -370,9 +370,9 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
 	/*
 	 * There is a slight chance that concurrent page migration,
 	 * or page reclaim finding a page of this now-VM_LOCKED vma,
-	 * will call mlock_vma_page() and raise page's mlock_count:
+	 * will call mlock_vma_folio() and raise page's mlock_count:
 	 * double counting, leaving the page unevictable indefinitely.
-	 * Communicate this danger to mlock_vma_page() with VM_IO,
+	 * Communicate this danger to mlock_vma_folio() with VM_IO,
 	 * which is a VM_SPECIAL flag not allowed on VM_LOCKED vmas.
 	 * mmap_lock is held in write mode here, so this weird
 	 * combination should not be visible to other mmap_lock users;
diff --git a/mm/rmap.c b/mm/rmap.c
index ab2246e6f20a..1934f9dc9758 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1261,7 +1261,7 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 			__page_check_anon_rmap(page, vma, address);
 	}
 
-	mlock_vma_page(page, vma, compound);
+	mlock_vma_folio(folio, vma, compound);
 }
 
 /**
@@ -1352,7 +1352,7 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	if (nr)
 		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
 
-	mlock_vma_page(page, vma, compound);
+	mlock_vma_folio(folio, vma, compound);
 }
 
 /**
-- 
2.35.1




* [PATCH 3/4] mm: Remove munlock_vma_page()
  2023-01-16 19:28 [PATCH 0/4] Remove leftover mlock/munlock page wrappers Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 1/4] mm: Remove page_evictable() Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 2/4] mm: Remove mlock_vma_page() Matthew Wilcox (Oracle)
@ 2023-01-16 19:28 ` Matthew Wilcox (Oracle)
  2023-01-16 19:28 ` [PATCH 4/4] mm: Clean up mlock_page / munlock_page references in comments Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-01-16 19:28 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

All callers now have a folio and can call munlock_vma_folio().  Update
the documentation to refer to munlock_vma_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/unevictable-lru.rst |  4 ++--
 kernel/events/uprobes.c              |  1 -
 mm/internal.h                        |  8 --------
 mm/rmap.c                            | 12 ++++++------
 4 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 45aadfefb810..9afceabf26f7 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -486,7 +486,7 @@ Before the unevictable/mlock changes, mlocking did not mark the pages in any
 way, so unmapping them required no processing.
 
 For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
 munlock_page() uses the mlock pagevec to batch up work to be done under
@@ -510,7 +510,7 @@ which had been Copied-On-Write from the file pages now being truncated.
 
 Mlocked pages can be munlocked and deleted in this way: like with munmap(),
 for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
 However, if there is a racing munlock(), since mlock_vma_pages_range() starts
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 29f36d2ae129..1a3904e0179c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -22,7 +22,6 @@
 #include <linux/swap.h>		/* folio_free_swap */
 #include <linux/ptrace.h>	/* user_enable_single_step */
 #include <linux/kdebug.h>	/* notifier mechanism */
-#include "../../mm/internal.h"	/* munlock_vma_page */
 #include <linux/percpu-rwsem.h>
 #include <linux/task_work.h>
 #include <linux/shmem_fs.h>
diff --git a/mm/internal.h b/mm/internal.h
index 0b74105ea363..ce462bf145b4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -548,7 +548,6 @@ static inline void mlock_vma_folio(struct folio *folio,
 }
 
 void munlock_folio(struct folio *folio);
-
 static inline void munlock_vma_folio(struct folio *folio,
 			struct vm_area_struct *vma, bool compound)
 {
@@ -557,11 +556,6 @@ static inline void munlock_vma_folio(struct folio *folio,
 		munlock_folio(folio);
 }
 
-static inline void munlock_vma_page(struct page *page,
-			struct vm_area_struct *vma, bool compound)
-{
-	munlock_vma_folio(page_folio(page), vma, compound);
-}
 void mlock_new_folio(struct folio *folio);
 bool need_mlock_drain(int cpu);
 void mlock_drain_local(void);
@@ -650,8 +644,6 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 }
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void munlock_vma_page(struct page *page,
-			struct vm_area_struct *vma, bool compound) { }
 static inline void mlock_new_folio(struct folio *folio) { }
 static inline bool need_mlock_drain(int cpu) { return false; }
 static inline void mlock_drain_local(void) { }
diff --git a/mm/rmap.c b/mm/rmap.c
index 1934f9dc9758..948ca17a96ad 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1432,14 +1432,14 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	}
 
 	/*
-	 * It would be tidy to reset PageAnon mapping when fully unmapped,
-	 * but that might overwrite a racing page_add_anon_rmap
-	 * which increments mapcount after us but sets mapping
-	 * before us: so leave the reset to free_pages_prepare,
-	 * and remember that it's only reliable while mapped.
+	 * It would be tidy to reset folio_test_anon mapping when fully
+	 * unmapped, but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping before us:
+	 * so leave the reset to free_pages_prepare, and remember that
+	 * it's only reliable while mapped.
 	 */
 
-	munlock_vma_page(page, vma, compound);
+	munlock_vma_folio(folio, vma, compound);
 }
 
 /*
-- 
2.35.1




* [PATCH 4/4] mm: Clean up mlock_page / munlock_page references in comments
  2023-01-16 19:28 [PATCH 0/4] Remove leftover mlock/munlock page wrappers Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2023-01-16 19:28 ` [PATCH 3/4] mm: Remove munlock_vma_page() Matthew Wilcox (Oracle)
@ 2023-01-16 19:28 ` Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-01-16 19:28 UTC (permalink / raw)
  To: linux-mm, Andrew Morton; +Cc: Matthew Wilcox (Oracle)

Change documentation and comments that refer to now-renamed functions.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/unevictable-lru.rst | 30 +++++++++++++++-------------
 mm/memory-failure.c                  |  2 +-
 mm/swap.c                            |  4 ++--
 3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 9afceabf26f7..0662254d8267 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -298,7 +298,7 @@ treated as a no-op and mlock_fixup() simply returns.
 If the VMA passes some filtering as described in "Filtering Special VMAs"
 below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
 off a subset of the VMA if the range does not cover the entire VMA.  Any pages
-already present in the VMA are then marked as mlocked by mlock_page() via
+already present in the VMA are then marked as mlocked by mlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range().
 
 Before returning from the system call, do_mlock() or mlockall() will call
@@ -373,20 +373,21 @@ Because of the VMA filtering discussed above, VM_LOCKED will not be set in
 any "special" VMAs.  So, those VMAs will be ignored for munlock.
 
 If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
-specified range.  All pages in the VMA are then munlocked by munlock_page() via
+specified range.  All pages in the VMA are then munlocked by munlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range() - the same
 function used when mlocking a VMA range, with new flags for the VMA indicating
 that it is munlock() being performed.
 
-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by  __munlock_page().  __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by  __munlock_folio().  __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.
 
-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it.  In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.
 
 
@@ -489,15 +490,16 @@ For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
 munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by  __munlock_page().  __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by  __munlock_folio().  __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.
 
-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it.  In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.
 
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ee8548a8b049..2dad72c1b281 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2167,7 +2167,7 @@ int memory_failure(unsigned long pfn, int flags)
 	}
 
 	/*
-	 * __munlock_pagevec may clear a writeback page's LRU flag without
+	 * __munlock_folio() may clear a writeback page's LRU flag without
 	 * page_lock. We need wait writeback completion for this page or it
 	 * may trigger vfs BUG while evict inode.
 	 */
diff --git a/mm/swap.c b/mm/swap.c
index 5e4f92700c16..2a51faa34e64 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -201,7 +201,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 	 * Is an smp_mb__after_atomic() still required here, before
 	 * folio_evictable() tests the mlocked flag, to rule out the possibility
 	 * of stranding an evictable folio on an unevictable LRU?  I think
-	 * not, because __munlock_page() only clears the mlocked flag
+	 * not, because __munlock_folio() only clears the mlocked flag
 	 * while the LRU lock is held.
 	 *
 	 * (That is not true of __page_cache_release(), and not necessarily
@@ -216,7 +216,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 		folio_set_unevictable(folio);
 		/*
 		 * folio->mlock_count = !!folio_test_mlocked(folio)?
-		 * But that leaves __mlock_page() in doubt whether another
+		 * But that leaves __mlock_folio() in doubt whether another
 		 * actor has already counted the mlock or not.  Err on the
 		 * safe side, underestimate, let page reclaim fix it, rather
 		 * than leaving a page on the unevictable LRU indefinitely.
-- 
2.35.1




* Re: [PATCH 1/4] mm: Remove page_evictable()
  2023-01-16 19:28 ` [PATCH 1/4] mm: Remove page_evictable() Matthew Wilcox (Oracle)
@ 2023-01-19 10:35   ` Mike Rapoport
  0 siblings, 0 replies; 6+ messages in thread
From: Mike Rapoport @ 2023-01-19 10:35 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Andrew Morton

On Mon, Jan 16, 2023 at 07:28:24PM +0000, Matthew Wilcox (Oracle) wrote:
> This function now has no users.  Also update the unevictable-lru
> documentation to discuss folios instead of pages (mostly).

Heh, it's ~30 out of ~180 ;-)
It looks to me like there are more places where the unevictable-lru
documentation should use folios rather than pages.

-- 
Sincerely yours,
Mike.

