linux-mm.kvack.org archive mirror
* [PATCH 0/3] Unify vma_address and vma_pgoff_address
@ 2024-03-28 22:58 Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios Matthew Wilcox (Oracle)
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-03-28 22:58 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

The current vma_address() pretends that the ambiguity between head and
tail pages is an advantage.  If you pass a head page to vma_address(), it
will operate on all pages in the folio, while if you pass a tail page,
it will operate on a single page.  That's not what any of the callers
actually want, so first convert all callers to use vma_pgoff_address()
and then rename vma_pgoff_address() to vma_address().
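
To make the ambiguity concrete, here is a minimal sketch contrasting the
old call with what it expands to and with the interface the series ends up
with (illustrative only; the surrounding caller is hypothetical):

	/* Old: the type of page silently selects the semantics. */
	addr = vma_address(page, vma);
	/* ... which expands to: */
	addr = vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
	/*
	 * compound_nr() is the folio size for a head page but 1 for a tail
	 * page, so the same call means either "any page of this folio" or
	 * "exactly this page" depending on which page was passed in.
	 */

	/* New (after patch 3): the caller states the range explicitly. */
	addr = vma_address(vma, pgoff, nr_pages);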

Matthew Wilcox (Oracle) (3):
  mm: Correct page_mapped_in_vma() for large folios
  mm: Remove vma_address()
  mm: Rename vma_pgoff_address back to vma_address

 mm/internal.h        | 28 ++++++++++------------------
 mm/memory-failure.c  |  2 +-
 mm/page_vma_mapped.c |  4 +++-
 mm/rmap.c            | 14 ++++++++++----
 4 files changed, 24 insertions(+), 24 deletions(-)

-- 
2.43.0




* [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios
  2024-03-28 22:58 [PATCH 0/3] Unify vma_address and vma_pgoff_address Matthew Wilcox (Oracle)
@ 2024-03-28 22:58 ` Matthew Wilcox (Oracle)
  2024-04-02 10:07   ` David Hildenbrand
  2024-03-28 22:58 ` [PATCH 2/3] mm: Remove vma_address() Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 3/3] mm: Rename vma_pgoff_address back to vma_address Matthew Wilcox (Oracle)
  2 siblings, 1 reply; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-03-28 22:58 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

If 'page' is the first page of a large folio then vma_address() will scan
for any page in the entire folio.  This can lead to page_mapped_in_vma()
returning true if some of the tail pages are mapped and the head page
is not.  This could lead to memory failure choosing to kill a task
unnecessarily.
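
As an illustration with made-up numbers (not part of the patch): for a
16-page folio at file offset 0x100, the fix derives the offset of the
exact page being asked about and restricts the lookup to that single page:

	struct folio *folio = page_folio(page);
	/* 0x100 for the head page, 0x100 + n for the n-th tail page. */
	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);

	/*
	 * The old vma_address(page, vma) covered the whole folio when given
	 * the head page, so the address it returned could belong to a tail
	 * page rather than the page actually being asked about.  Passing
	 * nr_pages = 1 considers only this page's mapping.
	 */
	pvmw.address = vma_pgoff_address(pgoff, 1, vma);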

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_vma_mapped.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 74d2de15fb5e..ac48d6284bad 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -325,6 +325,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
  */
 int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 {
+	struct folio *folio = page_folio(page);
+	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
 	struct page_vma_mapped_walk pvmw = {
 		.pfn = page_to_pfn(page),
 		.nr_pages = 1,
@@ -332,7 +334,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 		.flags = PVMW_SYNC,
 	};
 
-	pvmw.address = vma_address(page, vma);
+	pvmw.address = vma_pgoff_address(pgoff, 1, vma);
 	if (pvmw.address == -EFAULT)
 		return 0;
 	if (!page_vma_mapped_walk(&pvmw))
-- 
2.43.0




* [PATCH 2/3] mm: Remove vma_address()
  2024-03-28 22:58 [PATCH 0/3] Unify vma_address and vma_pgoff_address Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios Matthew Wilcox (Oracle)
@ 2024-03-28 22:58 ` Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 3/3] mm: Rename vma_pgoff_address back to vma_address Matthew Wilcox (Oracle)
  2 siblings, 0 replies; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-03-28 22:58 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

Convert the three remaining callers to call vma_pgoff_address()
directly.  This removes an ambiguity where we'd check just one
page if passed a tail page and all N pages if passed a head page.

Also add better kernel-doc for vma_pgoff_address().
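
For reference, the body of vma_pgoff_address() is elided from the hunk
below.  The following sketch of what it computes is a reconstruction of
mm/internal.h rather than part of this patch, so treat the exact bounds
checks as an assumption:

	unsigned long address;

	if (pgoff >= vma->vm_pgoff) {
		address = vma->vm_start +
			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
		/* Reject addresses past the end of (or wrapped around) the VMA. */
		if (address < vma->vm_start || address >= vma->vm_end)
			address = -EFAULT;
	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
		/* The range starts before this VMA but overlaps it. */
		address = vma->vm_start;
	} else {
		address = -EFAULT;
	}
	return address;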

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h | 23 ++++++++---------------
 mm/rmap.c     | 12 +++++++++---
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8e11f7b2da21..e312cb9f7368 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -803,9 +803,14 @@ void mlock_drain_remote(int cpu);
 
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
-/*
- * Return the start of user virtual address at the specific offset within
- * a vma.
+/**
+ * vma_pgoff_address - Find the virtual address a page range is mapped at
+ * @pgoff: The page offset within its object.
+ * @nr_pages: The number of pages to consider.
+ * @vma: The vma which maps this object.
+ *
+ * If any page in this range is mapped by this VMA, return the first address
+ * where any of these pages appear.  Otherwise, return -EFAULT.
  */
 static inline unsigned long
 vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
@@ -828,18 +833,6 @@ vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
 	return address;
 }
 
-/*
- * Return the start of user virtual address of a page within a vma.
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
- */
-static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
-{
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
-}
-
 /*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
diff --git a/mm/rmap.c b/mm/rmap.c
index 5ee9e338d09b..4b08b1a06688 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -775,6 +775,8 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 {
 	struct folio *folio = page_folio(page);
+	pgoff_t pgoff;
+
 	if (folio_test_anon(folio)) {
 		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
 		/*
@@ -790,7 +792,9 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 		return -EFAULT;
 	}
 
-	return vma_address(page, vma);
+	/* The !page__anon_vma above handles KSM folios */
+	pgoff = folio->index + folio_page_idx(folio, page);
+	return vma_pgoff_address(pgoff, 1, vma);
 }
 
 /*
@@ -2588,7 +2592,8 @@ static void rmap_walk_anon(struct folio *folio,
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 			pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;
-		unsigned long address = vma_address(&folio->page, vma);
+		unsigned long address = vma_pgoff_address(pgoff_start,
+				folio_nr_pages(folio), vma);
 
 		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
@@ -2649,7 +2654,8 @@ static void rmap_walk_file(struct folio *folio,
 lookup:
 	vma_interval_tree_foreach(vma, &mapping->i_mmap,
 			pgoff_start, pgoff_end) {
-		unsigned long address = vma_address(&folio->page, vma);
+		unsigned long address = vma_pgoff_address(pgoff_start,
+			       folio_nr_pages(folio), vma);
 
 		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
-- 
2.43.0




* [PATCH 3/3] mm: Rename vma_pgoff_address back to vma_address
  2024-03-28 22:58 [PATCH 0/3] Unify vma_address and vma_pgoff_address Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios Matthew Wilcox (Oracle)
  2024-03-28 22:58 ` [PATCH 2/3] mm: Remove vma_address() Matthew Wilcox (Oracle)
@ 2024-03-28 22:58 ` Matthew Wilcox (Oracle)
  2 siblings, 0 replies; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-03-28 22:58 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm

With all callers converted, we can use the nicer, shorter name.
Take this opportunity to reorder the arguments to the logical
order (larger object first).
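
Concretely, an illustrative before/after call (mirroring the hunks below):

	/* Before this patch: */
	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
	/* After: the containing object (the VMA) comes first. */
	pvmw.address = vma_address(vma, pgoff, nr_pages);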

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h        |  9 ++++-----
 mm/memory-failure.c  |  2 +-
 mm/page_vma_mapped.c |  2 +-
 mm/rmap.c            | 12 ++++++------
 4 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e312cb9f7368..19e6ddbe7134 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -804,17 +804,16 @@ void mlock_drain_remote(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /**
- * vma_pgoff_address - Find the virtual address a page range is mapped at
+ * vma_address - Find the virtual address a page range is mapped at
+ * @vma: The vma which maps this object.
  * @pgoff: The page offset within its object.
  * @nr_pages: The number of pages to consider.
- * @vma: The vma which maps this object.
  *
  * If any page in this range is mapped by this VMA, return the first address
  * where any of these pages appear.  Otherwise, return -EFAULT.
  */
-static inline unsigned long
-vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
-		  struct vm_area_struct *vma)
+static inline unsigned long vma_address(struct vm_area_struct *vma,
+		pgoff_t pgoff, unsigned long nr_pages)
 {
 	unsigned long address;
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index bdeeb4d2b584..07d40d40ec96 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -443,7 +443,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
 	tk->addr = ksm_addr ? ksm_addr : page_address_in_vma(p, vma);
 	if (is_zone_device_page(p)) {
 		if (fsdax_pgoff != FSDAX_INVALID_PGOFF)
-			tk->addr = vma_pgoff_address(fsdax_pgoff, 1, vma);
+			tk->addr = vma_address(vma, fsdax_pgoff, 1);
 		tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
 	} else
 		tk->size_shift = page_shift(compound_head(p));
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ac48d6284bad..53b8868ede61 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -334,7 +334,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 		.flags = PVMW_SYNC,
 	};
 
-	pvmw.address = vma_pgoff_address(pgoff, 1, vma);
+	pvmw.address = vma_address(vma, pgoff, 1);
 	if (pvmw.address == -EFAULT)
 		return 0;
 	if (!page_vma_mapped_walk(&pvmw))
diff --git a/mm/rmap.c b/mm/rmap.c
index 4b08b1a06688..56b313aa2ebf 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -794,7 +794,7 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 
 	/* The !page__anon_vma above handles KSM folios */
 	pgoff = folio->index + folio_page_idx(folio, page);
-	return vma_pgoff_address(pgoff, 1, vma);
+	return vma_address(vma, pgoff, 1);
 }
 
 /*
@@ -1132,7 +1132,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 	if (invalid_mkclean_vma(vma, NULL))
 		return 0;
 
-	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	pvmw.address = vma_address(vma, pgoff, nr_pages);
 	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
 
 	return page_vma_mkclean_one(&pvmw);
@@ -2592,8 +2592,8 @@ static void rmap_walk_anon(struct folio *folio,
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 			pgoff_start, pgoff_end) {
 		struct vm_area_struct *vma = avc->vma;
-		unsigned long address = vma_pgoff_address(pgoff_start,
-				folio_nr_pages(folio), vma);
+		unsigned long address = vma_address(vma, pgoff_start,
+				folio_nr_pages(folio));
 
 		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
@@ -2654,8 +2654,8 @@ static void rmap_walk_file(struct folio *folio,
 lookup:
 	vma_interval_tree_foreach(vma, &mapping->i_mmap,
 			pgoff_start, pgoff_end) {
-		unsigned long address = vma_pgoff_address(pgoff_start,
-			       folio_nr_pages(folio), vma);
+		unsigned long address = vma_address(vma, pgoff_start,
+			       folio_nr_pages(folio));
 
 		VM_BUG_ON_VMA(address == -EFAULT, vma);
 		cond_resched();
-- 
2.43.0




* Re: [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios
  2024-03-28 22:58 ` [PATCH 1/3] mm: Correct page_mapped_in_vma() for large folios Matthew Wilcox (Oracle)
@ 2024-04-02 10:07   ` David Hildenbrand
  0 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2024-04-02 10:07 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm

On 28.03.24 23:58, Matthew Wilcox (Oracle) wrote:
> If 'page' is the first page of a large folio then vma_address() will scan
> for any page in the entire folio.  This can lead to page_mapped_in_vma()
> returning true if some of the tail pages are mapped and the head page
> is not.  This could lead to memory failure choosing to kill a task
> unnecessarily.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   mm/page_vma_mapped.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 74d2de15fb5e..ac48d6284bad 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -325,6 +325,8 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>    */
>   int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
>   {
> +	struct folio *folio = page_folio(page);
> +	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
>   	struct page_vma_mapped_walk pvmw = {
>   		.pfn = page_to_pfn(page),
>   		.nr_pages = 1,
> @@ -332,7 +334,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
>   		.flags = PVMW_SYNC,
>   	};
>   
> -	pvmw.address = vma_address(page, vma);
> +	pvmw.address = vma_pgoff_address(pgoff, 1, vma);
>   	if (pvmw.address == -EFAULT)
>   		return 0;
>   	if (!page_vma_mapped_walk(&pvmw))

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



