From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	nvdimm@lists.linux.dev, David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Alistair Popple <apopple@nvidia.com>,
	Matthew Wilcox <willy@infradead.org>, Jan Kara <jack@suse.cz>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Christian Brauner <brauner@kernel.org>, Zi Yan <ziy@nvidia.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Vlastimil Babka <vbabka@suse.cz>,
	Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>, Jann Horn <jannh@google.com>,
	Pedro Falcato <pfalcato@suse.de>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [PATCH RFC 14/14] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page()
Date: Tue, 17 Jun 2025 17:43:45 +0200
Message-ID: <20250617154345.2494405-15-david@redhat.com>
In-Reply-To: <20250617154345.2494405-1-david@redhat.com>

... and hide it behind a Kconfig option. There is really no need for
any !XEN code to perform this check.
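
A driver that still needs the hook now has to select the new option;
for example (a sketch only; MY_DRIVER is a placeholder, gntdev does
the same in the diff below):

	config MY_DRIVER
		select FIND_NORMAL_PAGE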

The naming is a bit off: we want to find the "normal" page when a PTE
was marked "special". So it's really not "finding a special page".

Improve the documentation, and add a comment in the code where XEN ends
up performing the pte_mkspecial() through a hypercall. More details can
be found in commit 923b2919e2c3 ("xen/gntdev: mark userspace PTEs as
special on x86 PV guests").

Cc: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/xen/Kconfig              |  1 +
 drivers/xen/gntdev.c             |  5 +++--
 include/linux/mm.h               | 18 +++++++++++++-----
 mm/Kconfig                       |  2 ++
 mm/memory.c                      | 10 ++++++++--
 tools/testing/vma/vma_internal.h | 18 +++++++++++++-----
 6 files changed, 40 insertions(+), 14 deletions(-)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 24f485827e039..f9a35ed266ecf 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -138,6 +138,7 @@ config XEN_GNTDEV
 	depends on XEN
 	default m
 	select MMU_NOTIFIER
+	select FIND_NORMAL_PAGE
 	help
 	  Allows userspace processes to use grants.
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 61faea1f06630..d1bc0dae2cdf9 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -309,6 +309,7 @@ static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data)
 	BUG_ON(pgnr >= map->count);
 	pte_maddr = arbitrary_virt_to_machine(pte).maddr;
 
+	/* Note: this will perform a pte_mkspecial() through the hypercall. */
 	gnttab_set_map_op(&map->map_ops[pgnr], pte_maddr, flags,
 			  map->grants[pgnr].ref,
 			  map->grants[pgnr].domid);
@@ -516,7 +517,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
 	gntdev_put_map(priv, map);
 }
 
-static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
+static struct page *gntdev_vma_find_normal_page(struct vm_area_struct *vma,
 						 unsigned long addr)
 {
 	struct gntdev_grant_map *map = vma->vm_private_data;
@@ -527,7 +528,7 @@ static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
 static const struct vm_operations_struct gntdev_vmops = {
 	.open = gntdev_vma_open,
 	.close = gntdev_vma_close,
-	.find_special_page = gntdev_vma_find_special_page,
+	.find_normal_page = gntdev_vma_find_normal_page,
 };
 
 /* ------------------------------------------------------------------ */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 022e8ef2c78ef..b01475f3dca99 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -646,13 +646,21 @@ struct vm_operations_struct {
 	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
 					unsigned long addr, pgoff_t *ilx);
 #endif
+#ifdef CONFIG_FIND_NORMAL_PAGE
 	/*
-	 * Called by vm_normal_page() for special PTEs to find the
-	 * page for @addr.  This is useful if the default behavior
-	 * (using pte_page()) would not find the correct page.
+	 * Called by vm_normal_page() for special PTEs in @vma at @addr. This
+	 * allows for returning a "normal" page from vm_normal_page() even
+	 * though the PTE indicates that the "struct page" either does not exist
+	 * or should not be touched: "special".
+	 *
+	 * Do not add new users: this really only works when a "normal" page
+	 * was mapped, but then the PTE got changed to something weird (+
+	 * marked special) that would not make pte_pfn() identify the originally
+	 * inserted page.
 	 */
-	struct page *(*find_special_page)(struct vm_area_struct *vma,
-					  unsigned long addr);
+	struct page *(*find_normal_page)(struct vm_area_struct *vma,
+					 unsigned long addr);
+#endif /* CONFIG_FIND_NORMAL_PAGE */
 };
 
 #ifdef CONFIG_NUMA_BALANCING
diff --git a/mm/Kconfig b/mm/Kconfig
index c6194d1f9d170..607a3f9672bdb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1390,6 +1390,8 @@ config PT_RECLAIM
 
 	  Note: now only empty user PTE page table pages will be reclaimed.
 
+config FIND_NORMAL_PAGE
+	def_bool n
 
 source "mm/damon/Kconfig"
 
diff --git a/mm/memory.c b/mm/memory.c
index 6c65f51248250..1eba95fcde096 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -619,6 +619,10 @@ static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
  * If an architecture does not support pte_special(), this function is less
  * trivial and more expensive in some cases.
  *
+ * With CONFIG_FIND_NORMAL_PAGE, we might have pte_special() set on PTEs that
+ * actually map "normal" pages: however, that page cannot be looked up through
+ * pte_pfn(), but instead will be looked up through vm_ops->find_normal_page().
+ *
  * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
  * special mapping (even if there are underlying and valid "struct pages").
  * COWed pages of a VM_PFNMAP are always normal.
@@ -639,8 +643,10 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long pfn = pte_pfn(pte);
 
 	if (unlikely(pte_special(pte))) {
-		if (vma->vm_ops && vma->vm_ops->find_special_page)
-			return vma->vm_ops->find_special_page(vma, addr);
+#ifdef CONFIG_FIND_NORMAL_PAGE
+		if (vma->vm_ops && vma->vm_ops->find_normal_page)
+			return vma->vm_ops->find_normal_page(vma, addr);
+#endif /* CONFIG_FIND_NORMAL_PAGE */
 		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
 			return NULL;
 		if (is_zero_pfn(pfn))
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 51dd122b8d501..c5bf041036dd7 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -470,13 +470,21 @@ struct vm_operations_struct {
 	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
 					unsigned long addr, pgoff_t *ilx);
 #endif
+#ifdef CONFIG_FIND_NORMAL_PAGE
 	/*
-	 * Called by vm_normal_page() for special PTEs to find the
-	 * page for @addr.  This is useful if the default behavior
-	 * (using pte_page()) would not find the correct page.
+	 * Called by vm_normal_page() for special PTEs in @vma at @addr. This
+	 * allows for returning a "normal" page from vm_normal_page() even
+	 * though the PTE indicates that the "struct page" either does not exist
+	 * or should not be touched: "special".
+	 *
+	 * Do not add new users: this really only works when a "normal" page
+	 * was mapped, but then the PTE got changed to something weird (+
+	 * marked special) that would not make pte_pfn() identify the originally
+	 * inserted page.
 	 */
-	struct page *(*find_special_page)(struct vm_area_struct *vma,
-					  unsigned long addr);
+	struct page *(*find_normal_page)(struct vm_area_struct *vma,
+					 unsigned long addr);
+#endif /* CONFIG_FIND_NORMAL_PAGE */
 };
 
 struct vm_unmapped_area_info {
-- 
2.49.0


Thread overview: 65+ messages
2025-06-17 15:43 [PATCH RFC 00/14] mm: vm_normal_page*() + CoW PFNMAP improvements David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 01/14] mm/memory: drop highest_memmap_pfn sanity check in vm_normal_page() David Hildenbrand
2025-06-20 12:50   ` Oscar Salvador
2025-06-23 14:04     ` David Hildenbrand
2025-06-25  7:54       ` Oscar Salvador
2025-07-03 12:34       ` Lance Yang
2025-07-03 12:39         ` David Hildenbrand
2025-07-03 14:44           ` Lance Yang
2025-07-04 12:40             ` David Hildenbrand
2025-07-07  6:31               ` Hugh Dickins
2025-07-07 13:19                 ` David Hildenbrand
2025-07-08  2:52                   ` Hugh Dickins
2025-07-11 15:30                     ` David Hildenbrand
2025-07-11 18:49                       ` Hugh Dickins
2025-07-11 18:57                         ` David Hildenbrand
2025-06-25  7:55   ` Oscar Salvador
2025-07-03 14:50   ` Lance Yang
2025-06-17 15:43 ` [PATCH RFC 02/14] mm: drop highest_memmap_pfn David Hildenbrand
2025-06-20 13:04   ` Oscar Salvador
2025-06-20 18:11   ` Pedro Falcato
2025-06-17 15:43 ` [PATCH RFC 03/14] mm: compare pfns only if the entry is present when inserting pfns/pages David Hildenbrand
2025-06-20 13:27   ` Oscar Salvador
2025-06-23 19:22     ` David Hildenbrand
2025-06-20 18:24   ` Pedro Falcato
2025-06-23 19:19     ` David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 04/14] mm/huge_memory: move more common code into insert_pmd() David Hildenbrand
2025-06-20 14:12   ` Oscar Salvador
2025-07-07  2:48     ` Alistair Popple
2025-06-17 15:43 ` [PATCH RFC 05/14] mm/huge_memory: move more common code into insert_pud() David Hildenbrand
2025-06-20 14:15   ` Oscar Salvador
2025-07-07  2:51   ` Alistair Popple
2025-06-17 15:43 ` [PATCH RFC 06/14] mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd() David Hildenbrand
2025-06-25  8:15   ` Oscar Salvador
2025-06-25  8:17     ` Oscar Salvador
2025-06-25  8:20   ` Oscar Salvador
2025-06-25  8:59     ` David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 07/14] fs/dax: use vmf_insert_folio_pmd() to insert the huge zero folio David Hildenbrand
2025-06-24  1:16   ` Alistair Popple
2025-06-25  9:03     ` David Hildenbrand
2025-07-04 13:22       ` David Hildenbrand
2025-07-07 11:50         ` Alistair Popple
2025-06-17 15:43 ` [PATCH RFC 08/14] mm/huge_memory: mark PMD mappings of the huge zero folio special David Hildenbrand
2025-06-25  8:32   ` Oscar Salvador
2025-07-14 12:41     ` David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 09/14] mm/memory: introduce is_huge_zero_pfn() and use it in vm_normal_page_pmd() David Hildenbrand
2025-06-25  8:37   ` Oscar Salvador
2025-06-17 15:43 ` [PATCH RFC 10/14] mm/memory: factor out common code from vm_normal_page_*() David Hildenbrand
2025-06-25  8:53   ` Oscar Salvador
2025-06-25  8:57     ` David Hildenbrand
2025-06-25  9:20       ` Oscar Salvador
2025-06-25 10:14         ` David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 11/14] mm: remove "horrible special case to handle copy-on-write behaviour" David Hildenbrand
2025-06-25  8:47   ` David Hildenbrand
2025-06-25  9:02     ` Oscar Salvador
2025-06-25  9:04       ` David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 12/14] mm: drop addr parameter from vm_normal_*_pmd() David Hildenbrand
2025-06-17 15:43 ` [PATCH RFC 13/14] mm: introduce and use vm_normal_page_pud() David Hildenbrand
2025-06-25  9:22   ` Oscar Salvador
2025-06-17 15:43 ` David Hildenbrand [this message]
2025-06-25  9:34   ` [PATCH RFC 14/14] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page() Oscar Salvador
2025-07-14 14:19     ` David Hildenbrand
2025-06-17 16:18 ` [PATCH RFC 00/14] mm: vm_normal_page*() + CoW PFNMAP improvements David Hildenbrand
2025-06-17 18:25   ` David Hildenbrand
2025-06-25  8:49 ` Lorenzo Stoakes
2025-06-25  8:55   ` David Hildenbrand
