From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Yu Zhao <yuzhao@google.com>,
"Yin, Fengwei" <fengwei.yin@intel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 PATCH 09/17] mm: Update wp_page_reuse() to operate on range of pages
Date: Fri, 14 Apr 2023 14:02:55 +0100 [thread overview]
Message-ID: <20230414130303.2345383-10-ryan.roberts@arm.com> (raw)
In-Reply-To: <20230414130303.2345383-1-ryan.roberts@arm.com>
We will shortly be updating do_wp_page() to be able to reuse a range of
pages from a large anon folio. As an enabling step, modify
wp_page_reuse() to operate on a range of pages when a struct
anon_folio_range is passed in. Batching in this way allows the cache
maintenance and event counting to be consolidated into single
range-based calls, for a small performance improvement.

Currently all callsites pass range=NULL, so no functional change is
intended.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/memory.c | 80 +++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 60 insertions(+), 20 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index f92a28064596..83835ff5a818 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3030,6 +3030,14 @@ static inline int max_anon_folio_order(struct vm_area_struct *vma)
return ANON_FOLIO_ORDER_MAX;
}
+struct anon_folio_range {
+ unsigned long va_start;
+ pte_t *pte_start;
+ struct page *pg_start;
+ int nr;
+ bool exclusive;
+};
+
/*
* Returns index of first pte that is not none, or nr if all are none.
*/
@@ -3122,31 +3130,63 @@ static int calc_anon_folio_order_alloc(struct vm_fault *vmf, int order)
* case, all we need to do here is to mark the page as writable and update
* any related book-keeping.
*/
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf,
+ struct anon_folio_range *range)
__releases(vmf->ptl)
{
struct vm_area_struct *vma = vmf->vma;
- struct page *page = vmf->page;
+ unsigned long addr;
+ pte_t *pte;
+ struct page *page;
+ int nr;
pte_t entry;
+ int change = 0;
+ int i;
VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
- VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
- /*
- * Clear the pages cpupid information as the existing
- * information potentially belongs to a now completely
- * unrelated process.
- */
- if (page)
- page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+ if (range) {
+ addr = range->va_start;
+ pte = range->pte_start;
+ page = range->pg_start;
+ nr = range->nr;
+ } else {
+ addr = vmf->address;
+ pte = vmf->pte;
+ page = vmf->page;
+ nr = 1;
+ }
+
+ if (page) {
+ for (i = 0; i < nr; i++, page++) {
+ VM_BUG_ON(PageAnon(page) && !PageAnonExclusive(page));
+
+ /*
+ * Clear the pages cpupid information as the existing
+ * information potentially belongs to a now completely
+ * unrelated process.
+ */
+ page_cpupid_xchg_last(page,
+ (1 << LAST_CPUPID_SHIFT) - 1);
+ }
+ }
+
+ flush_cache_range(vma, addr, addr + (nr << PAGE_SHIFT));
+
+ for (i = 0; i < nr; i++) {
+ entry = pte_mkyoung(pte[i]);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+ change |= ptep_set_access_flags(vma,
+ addr + (i << PAGE_SHIFT),
+ pte + i,
+ entry, 1);
+ }
+
+ if (change)
+ update_mmu_cache_range(vma, addr, pte, nr);
- flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
- entry = pte_mkyoung(vmf->orig_pte);
- entry = maybe_mkwrite(pte_mkdirty(entry), vma);
- if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
- update_mmu_cache(vma, vmf->address, vmf->pte);
pte_unmap_unlock(vmf->pte, vmf->ptl);
- count_vm_event(PGREUSE);
+ count_vm_events(PGREUSE, nr);
}
/*
@@ -3359,7 +3399,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
return VM_FAULT_NOPAGE;
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, NULL);
return 0;
}
@@ -3381,7 +3421,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
return ret;
return finish_mkwrite_fault(vmf);
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, NULL);
return 0;
}
@@ -3410,7 +3450,7 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
return tmp;
}
} else {
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, NULL);
lock_page(vmf->page);
}
ret |= fault_dirty_shared_page(vmf);
@@ -3534,7 +3574,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
return 0;
}
- wp_page_reuse(vmf);
+ wp_page_reuse(vmf, NULL);
return 0;
}
copy:
--
2.25.1