* [PATCH v1 00/11] mm: replace follow_page() by folio_walk
@ 2024-08-02 15:55 David Hildenbrand
  2024-08-02 15:55 ` [PATCH v1 01/11] mm: provide vm_normal_(page|folio)_pmd() with CONFIG_PGTABLE_HAS_HUGE_LEAVES David Hildenbrand
                   ` (13 more replies)
  0 siblings, 14 replies; 32+ messages in thread
From: David Hildenbrand @ 2024-08-02 15:55 UTC
  To: linux-kernel
  Cc: linux-mm, linux-doc, kvm, linux-s390, linux-fsdevel,
	David Hildenbrand, Andrew Morton, Matthew Wilcox (Oracle),
	Jonathan Corbet, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Gerald Schaefer

While looking into a way of moving the last folio_likely_mapped_shared() call
in add_folio_for_migration() under the PTL, I found myself removing
follow_page(). This paves the way for cleaning up all of the FOLL_*/follow_*
terminology so that it simply refers to GUP nowadays.

The new page table walker looks up a mapped folio and returns to the
caller with the PTL held, so that the folio cannot get unmapped
concurrently. Callers can then decide whether they really want to take a
short-term folio reference or whether they can simply unlock the PTL and
be done with it.
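
To illustrate, here is a minimal (untested) sketch of the intended caller
pattern. It assumes the folio_walk_start()/folio_walk_end() interface as
introduced in patch #2 and uses a made-up caller, so treat the details as
illustrative only:

static int example_user(struct vm_area_struct *vma, unsigned long addr)
{
	struct folio_walk fw;
	struct folio *folio;

	/* The caller is assumed to hold the mmap lock of vma->vm_mm. */
	folio = folio_walk_start(&fw, vma, addr, 0);
	if (!folio)
		return -EFAULT;

	/*
	 * The PTL is held here, so the folio cannot get unmapped
	 * concurrently and checks like folio_likely_mapped_shared()
	 * remain stable.
	 */
	if (folio_likely_mapped_shared(folio)) {
		folio_walk_end(&fw, vma);
		return -EBUSY;
	}

	/* Only take a short-term reference if the folio is needed later. */
	folio_get(folio);
	folio_walk_end(&fw, vma);

	/* ... work on the folio without the PTL held ... */
	folio_put(folio);
	return 0;
}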

folio_walk is similar to page_vma_mapped_walk(), except that we don't know
the folio we want to walk to and that we are only walking to exactly one
PTE/PMD/PUD.

folio_walk provides access to the pte/pmd/pud (and to the referenced folio
page, because things like KSM need that); however, as part of this series,
no page table modifications are performed by users.
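
As a rough, hedged example of that read-only access (the field and level
names below are taken from my reading of patch #2, so please double-check
them against the patch), a user could check the accessed bit of whatever
entry maps the folio, somewhat like DAMON would:

	bool young = false;

	folio = folio_walk_start(&fw, vma, addr, 0);
	if (folio) {
		/* fw.page is the exact page of the folio mapped at addr. */
		if (fw.level == FW_LEVEL_PTE)
			young = pte_young(fw.pte);
		else if (fw.level == FW_LEVEL_PMD)
			young = pmd_young(fw.pmd);
		folio_walk_end(&fw, vma);
	}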

We might be able to convert some other walk_page_range() users that really
only walk to one address, such as DAMON with
damon_mkold_ops/damon_young_ops. It might make sense to extend folio_walk
in the future to optionally fault in a folio (if applicable), such that we
can replace some get_user_pages() users that really only want to look up
a single page/folio under the PTL without unconditionally grabbing a folio
reference.

I have plans to extend the approach into a range walker that will try
batching various page table entries (not just folio pages) to be a better
replacement for walk_page_range() -- users will be able to opt in to which
types of page table entries they want to process -- but that will require
more work and more thought.

KSM seems to work just fine (ksm_functional_tests selftests) and
move_pages seems to work (migration selftest). I tested the leaf
implementation extensively using various hugetlb sizes (64K, 2M, 32M, 1G)
on arm64 using move_pages, and did some more testing on x86-64. I also
cross-compiled on a bunch of architectures.

I am not able to test the s390x Secure Execution changes, unfortunately.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>

David Hildenbrand (11):
  mm: provide vm_normal_(page|folio)_pmd() with
    CONFIG_PGTABLE_HAS_HUGE_LEAVES
  mm/pagewalk: introduce folio_walk_start() + folio_walk_end()
  mm/migrate: convert do_pages_stat_array() from follow_page() to
    folio_walk
  mm/migrate: convert add_page_for_migration() from follow_page() to
    folio_walk
  mm/ksm: convert get_mergeable_page() from follow_page() to folio_walk
  mm/ksm: convert scan_get_next_rmap_item() from follow_page() to
    folio_walk
  mm/huge_memory: convert split_huge_pages_pid() from follow_page() to
    folio_walk
  s390/uv: convert gmap_destroy_page() from follow_page() to folio_walk
  s390/mm/fault: convert do_secure_storage_access() from follow_page()
    to folio_walk
  mm: remove follow_page()
  mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk

 Documentation/mm/transhuge.rst |   6 +-
 arch/s390/kernel/uv.c          |  18 ++-
 arch/s390/mm/fault.c           |  16 ++-
 include/linux/mm.h             |   3 -
 include/linux/pagewalk.h       |  58 ++++++++++
 mm/filemap.c                   |   2 +-
 mm/gup.c                       |  24 +---
 mm/huge_memory.c               |  18 +--
 mm/ksm.c                       | 127 +++++++++------------
 mm/memory.c                    |   2 +-
 mm/migrate.c                   | 131 ++++++++++-----------
 mm/nommu.c                     |   6 -
 mm/pagewalk.c                  | 202 +++++++++++++++++++++++++++++++++
 13 files changed, 413 insertions(+), 200 deletions(-)

-- 
2.45.2




Thread overview: 32+ messages
2024-08-02 15:55 [PATCH v1 00/11] mm: replace follow_page() by folio_walk David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 01/11] mm: provide vm_normal_(page|folio)_pmd() with CONFIG_PGTABLE_HAS_HUGE_LEAVES David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 02/11] mm/pagewalk: introduce folio_walk_start() + folio_walk_end() David Hildenbrand
2024-08-07  9:17   ` Claudio Imbrenda
2024-08-07  9:31     ` David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 03/11] mm/migrate: convert do_pages_stat_array() from follow_page() to folio_walk David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 04/11] mm/migrate: convert add_page_for_migration() " David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 05/11] mm/ksm: convert get_mergeable_page() " David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 06/11] mm/ksm: convert scan_get_next_rmap_item() " David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 07/11] mm/huge_memory: convert split_huge_pages_pid() " David Hildenbrand
2024-08-06  9:46   ` Ryan Roberts
2024-08-06  9:56     ` David Hildenbrand
2024-08-06 10:03       ` David Hildenbrand
2024-08-06 10:24         ` David Hildenbrand
2024-08-06 11:17           ` Ryan Roberts
2024-08-06 15:36           ` Zi Yan
2024-08-07  9:57             ` David Hildenbrand
2024-08-07 14:45               ` Zi Yan
2024-08-07 14:52                 ` David Hildenbrand
2024-08-15 10:04   ` Pankaj Raghav
2024-08-15 10:20     ` David Hildenbrand
2024-08-15 13:43       ` Pankaj Raghav (Samsung)
2024-08-02 15:55 ` [PATCH v1 08/11] s390/uv: convert gmap_destroy_page() " David Hildenbrand
2024-08-07  8:59   ` Claudio Imbrenda
2024-08-02 15:55 ` [PATCH v1 09/11] s390/mm/fault: convert do_secure_storage_access() " David Hildenbrand
2024-08-07  8:59   ` Claudio Imbrenda
2024-08-02 15:55 ` [PATCH v1 10/11] mm: remove follow_page() David Hildenbrand
2024-08-02 15:55 ` [PATCH v1 11/11] mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk David Hildenbrand
2024-08-03  5:34 ` [PATCH v1 00/11] mm: replace follow_page() by folio_walk Andrew Morton
2024-08-06 13:42 ` Claudio Imbrenda
2024-08-07  9:15 ` Claudio Imbrenda
2024-08-07  9:33   ` David Hildenbrand
