From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
	kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-fsdevel@vger.kernel.org,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Janosch Frank <frankja@linux.ibm.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Subject: [PATCH v1 11/11] mm/ksm: convert break_ksm() from walk_page_range_vma() to folio_walk
Date: Fri,  2 Aug 2024 17:55:24 +0200
Message-ID: <20240802155524.517137-12-david@redhat.com>
In-Reply-To: <20240802155524.517137-1-david@redhat.com>

Let's simplify by reusing folio_walk. Keep the existing behavior by
handling migration entries and zeropages (FW_MIGRATION | FW_ZEROPAGE).
As we no longer go through the mm_walk infrastructure, write-lock the
VMA with vma_start_write() directly when lock_vma is set, which the
PGWALK_WRLOCK walk_lock mode previously did for us.
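
For context, a minimal sketch of the folio_walk pattern (introduced
earlier in this series) as applied here; addr_maps_ksm_page() is a
hypothetical helper for illustration only, not part of this patch, and
the caller must hold the mmap lock:

	/* Sketch only: not part of this patch. */
	static bool addr_maps_ksm_page(struct vm_area_struct *vma,
				       unsigned long addr)
	{
		struct folio_walk fw;
		struct folio *folio;
		bool ret = false;

		/* On success, a folio is mapped at addr and the PTL is held. */
		folio = folio_walk_start(&fw, vma, addr,
					 FW_MIGRATION | FW_ZEROPAGE);
		if (folio) {
			/* A small folio implies FW_LEVEL_PTE, so fw.pte is valid. */
			if (!folio_test_large(folio) &&
			    (folio_test_ksm(folio) || is_ksm_zero_pte(fw.pte)))
				ret = true;
			/* Drops the page table lock. */
			folio_walk_end(&fw, vma);
		}
		return ret;
	}

folio_walk_start() returns NULL when nothing is mapped at the address
or when the flags do not permit the mapped entry, so the caller only
has to handle the success case.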

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/ksm.c | 63 ++++++++++++++------------------------------------------
 1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 0f5b2bba4ef0..8e53666bc7b0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -608,47 +608,6 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 	return atomic_read(&mm->mm_users) == 0;
 }
 
-static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
-			struct mm_walk *walk)
-{
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte;
-	pte_t ptent;
-	int ret;
-
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
-	if (!pte)
-		return 0;
-	ptent = ptep_get(pte);
-	if (pte_present(ptent)) {
-		page = vm_normal_page(walk->vma, addr, ptent);
-	} else if (!pte_none(ptent)) {
-		swp_entry_t entry = pte_to_swp_entry(ptent);
-
-		/*
-		 * As KSM pages remain KSM pages until freed, no need to wait
-		 * here for migration to end.
-		 */
-		if (is_migration_entry(entry))
-			page = pfn_swap_entry_to_page(entry);
-	}
-	/* return 1 if the page is an normal ksm page or KSM-placed zero page */
-	ret = (page && PageKsm(page)) || is_ksm_zero_pte(ptent);
-	pte_unmap_unlock(pte, ptl);
-	return ret;
-}
-
-static const struct mm_walk_ops break_ksm_ops = {
-	.pmd_entry = break_ksm_pmd_entry,
-	.walk_lock = PGWALK_RDLOCK,
-};
-
-static const struct mm_walk_ops break_ksm_lock_vma_ops = {
-	.pmd_entry = break_ksm_pmd_entry,
-	.walk_lock = PGWALK_WRLOCK,
-};
-
 /*
  * We use break_ksm to break COW on a ksm page by triggering unsharing,
  * such that the ksm page will get replaced by an exclusive anonymous page.
@@ -665,16 +624,26 @@ static const struct mm_walk_ops break_ksm_lock_vma_ops = {
 static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
 {
 	vm_fault_t ret = 0;
-	const struct mm_walk_ops *ops = lock_vma ?
-				&break_ksm_lock_vma_ops : &break_ksm_ops;
+
+	if (lock_vma)
+		vma_start_write(vma);
 
 	do {
-		int ksm_page;
+		bool ksm_page = false;
+		struct folio_walk fw;
+		struct folio *folio;
 
 		cond_resched();
-		ksm_page = walk_page_range_vma(vma, addr, addr + 1, ops, NULL);
-		if (WARN_ON_ONCE(ksm_page < 0))
-			return ksm_page;
+		folio = folio_walk_start(&fw, vma, addr,
+					 FW_MIGRATION | FW_ZEROPAGE);
+		if (folio) {
+			/* Small folio implies FW_LEVEL_PTE. */
+			if (!folio_test_large(folio) &&
+			    (folio_test_ksm(folio) || is_ksm_zero_pte(fw.pte)))
+				ksm_page = true;
+			folio_walk_end(&fw, vma);
+		}
+
 		if (!ksm_page)
 			return 0;
 		ret = handle_mm_fault(vma, addr,
-- 
2.45.2

