From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 3/4] mm: Remove munlock_vma_page()
Date: Mon, 16 Jan 2023 19:28:26 +0000
Message-ID: <20230116192827.2146732-4-willy@infradead.org>
In-Reply-To: <20230116192827.2146732-1-willy@infradead.org>
All callers now have a folio and can call munlock_vma_folio(). Update
the documentation to refer to munlock_vma_folio().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
Documentation/mm/unevictable-lru.rst | 4 ++--
kernel/events/uprobes.c | 1 -
mm/internal.h | 8 --------
mm/rmap.c | 12 ++++++------
4 files changed, 8 insertions(+), 17 deletions(-)
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 45aadfefb810..9afceabf26f7 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -486,7 +486,7 @@ Before the unevictable/mlock changes, mlocking did not mark the pages in any
way, so unmapping them required no processing.
For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
(unless it was a PTE mapping of a part of a transparent huge page).
munlock_page() uses the mlock pagevec to batch up work to be done under
@@ -510,7 +510,7 @@ which had been Copied-On-Write from the file pages now being truncated.
Mlocked pages can be munlocked and deleted in this way: like with munmap(),
for each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
-munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
+munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
(unless it was a PTE mapping of a part of a transparent huge page).
However, if there is a racing munlock(), since mlock_vma_pages_range() starts
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 29f36d2ae129..1a3904e0179c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -22,7 +22,6 @@
#include <linux/swap.h> /* folio_free_swap */
#include <linux/ptrace.h> /* user_enable_single_step */
#include <linux/kdebug.h> /* notifier mechanism */
-#include "../../mm/internal.h" /* munlock_vma_page */
#include <linux/percpu-rwsem.h>
#include <linux/task_work.h>
#include <linux/shmem_fs.h>
diff --git a/mm/internal.h b/mm/internal.h
index 0b74105ea363..ce462bf145b4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -548,7 +548,6 @@ static inline void mlock_vma_folio(struct folio *folio,
}
void munlock_folio(struct folio *folio);
-
static inline void munlock_vma_folio(struct folio *folio,
struct vm_area_struct *vma, bool compound)
{
@@ -557,11 +556,6 @@ static inline void munlock_vma_folio(struct folio *folio,
munlock_folio(folio);
}
-static inline void munlock_vma_page(struct page *page,
- struct vm_area_struct *vma, bool compound)
-{
- munlock_vma_folio(page_folio(page), vma, compound);
-}
void mlock_new_folio(struct folio *folio);
bool need_mlock_drain(int cpu);
void mlock_drain_local(void);
@@ -650,8 +644,6 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
}
#else /* !CONFIG_MMU */
static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void munlock_vma_page(struct page *page,
- struct vm_area_struct *vma, bool compound) { }
static inline void mlock_new_folio(struct folio *folio) { }
static inline bool need_mlock_drain(int cpu) { return false; }
static inline void mlock_drain_local(void) { }
diff --git a/mm/rmap.c b/mm/rmap.c
index 1934f9dc9758..948ca17a96ad 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1432,14 +1432,14 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
}
/*
- * It would be tidy to reset PageAnon mapping when fully unmapped,
- * but that might overwrite a racing page_add_anon_rmap
- * which increments mapcount after us but sets mapping
- * before us: so leave the reset to free_pages_prepare,
- * and remember that it's only reliable while mapped.
+ * It would be tidy to reset folio_test_anon mapping when fully
+ * unmapped, but that might overwrite a racing page_add_anon_rmap
+ * which increments mapcount after us but sets mapping before us:
+ * so leave the reset to free_pages_prepare, and remember that
+ * it's only reliable while mapped.
*/
- munlock_vma_page(page, vma, compound);
+ munlock_vma_folio(folio, vma, compound);
}
/*
--
2.35.1
Thread overview:
2023-01-16 19:28 [PATCH 0/4] Remove leftover mlock/munlock page wrappers Matthew Wilcox (Oracle)
2023-01-16 19:28 ` [PATCH 1/4] mm: Remove page_evictable() Matthew Wilcox (Oracle)
2023-01-19 10:35 ` Mike Rapoport
2023-01-16 19:28 ` [PATCH 2/4] mm: Remove mlock_vma_page() Matthew Wilcox (Oracle)
2023-01-16 19:28 ` Matthew Wilcox (Oracle) [this message]
2023-01-16 19:28 ` [PATCH 4/4] mm: Clean up mlock_page / munlock_page references in comments Matthew Wilcox (Oracle)