From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson <seanjc@google.com>,
Oscar Salvador <osalvador@suse.de>,
Jason Gunthorpe <jgg@nvidia.com>,
Axel Rasmussen <axelrasmussen@google.com>,
linux-arm-kernel@lists.infradead.org, x86@kernel.org,
peterx@redhat.com, Will Deacon <will@kernel.org>,
Gavin Shan <gshan@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>, Zi Yan <ziy@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Ingo Molnar <mingo@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Borislav Petkov <bp@alien8.de>,
David Hildenbrand <david@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
kvm@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
Alex Williamson <alex.williamson@redhat.com>,
Yan Zhao <yan.y.zhao@intel.com>
Subject: [PATCH 09/19] mm: New follow_pfnmap API
Date: Fri, 9 Aug 2024 12:08:59 -0400
Message-ID: <20240809160909.1023470-10-peterx@redhat.com>
In-Reply-To: <20240809160909.1023470-1-peterx@redhat.com>
Introduce a pair of APIs to follow pfn mappings and retrieve the entry
information.  The pair is similar to what follow_pte() does today, but
differs in that it also recognizes huge pfn mappings.
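
A minimal usage sketch, assuming the caller holds the mmap read lock of
a pfnmap vma (vma, addr, want_write and do_something_with() are
illustrative placeholders; error handling elided):

	struct follow_pfnmap_args args = { .vma = vma, .address = addr };

	if (follow_pfnmap_start(&args))
		return -EINVAL;
	if (want_write && !args.writable) {
		follow_pfnmap_end(&args);
		return -EFAULT;
	}
	/* args.pfn and friends are only stable until the end() below */
	do_something_with(args.pfn, args.pgprot);
	follow_pfnmap_end(&args);
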
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h |  31 +++++++++
 mm/memory.c        | 160 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 191 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 90ca84200800..7471302658af 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2374,6 +2374,37 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address,
int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
			void *buf, int len, int write);
+struct follow_pfnmap_args {
+	/**
+	 * Inputs:
+	 * @vma: Pointer to the struct vm_area_struct to walk
+	 * @address: the virtual address to walk
+	 */
+	struct vm_area_struct *vma;
+	unsigned long address;
+	/**
+	 * Internals:
+	 *
+	 * The caller shouldn't touch any of these.
+	 */
+	spinlock_t *lock;
+	pte_t *ptep;
+	/**
+	 * Outputs:
+	 *
+	 * @pfn: the PFN of the address
+	 * @pgprot: the pgprot_t of the mapping
+	 * @writable: whether the mapping is writable
+	 * @special: whether the mapping is special (a raw pfn map, no struct page)
+	 */
+	unsigned long pfn;
+	pgprot_t pgprot;
+	bool writable;
+	bool special;
+};
+int follow_pfnmap_start(struct follow_pfnmap_args *args);
+void follow_pfnmap_end(struct follow_pfnmap_args *args);
+
extern void truncate_pagecache(struct inode *inode, loff_t new);
extern void truncate_setsize(struct inode *inode, loff_t newsize);
void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
diff --git a/mm/memory.c b/mm/memory.c
index 67496dc5064f..2194e0f9f541 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6338,6 +6338,166 @@ int follow_pte(struct vm_area_struct *vma, unsigned long address,
}
EXPORT_SYMBOL_GPL(follow_pte);
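+/*
+ * Fill in @args from a leaf entry found at any pgtable level.  @pfn_base
+ * and @addr_mask describe the leaf's coverage: e.g., for a 2M pmd leaf,
+ * the pfn of @args->address is pmd_pfn() plus the in-range page offset.
+ */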
+static inline void pfnmap_args_setup(struct follow_pfnmap_args *args,
+				     spinlock_t *lock, pte_t *ptep,
+				     pgprot_t pgprot, unsigned long pfn_base,
+				     unsigned long addr_mask, bool writable,
+				     bool special)
+{
+	args->lock = lock;
+	args->ptep = ptep;
+	args->pfn = pfn_base + ((args->address & ~addr_mask) >> PAGE_SHIFT);
+	args->pgprot = pgprot;
+	args->writable = writable;
+	args->special = special;
+}
+
+static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)
+{
+#ifdef CONFIG_LOCKDEP
+	struct file *file = vma->vm_file;
+	struct address_space *mapping = file ? file->f_mapping : NULL;
+
+	if (mapping)
+		lockdep_assert(lockdep_is_held(&mapping->i_mmap_rwsem) ||
+			       lockdep_is_held(&vma->vm_mm->mmap_lock));
+	else
+		lockdep_assert(lockdep_is_held(&vma->vm_mm->mmap_lock));
+#endif
+}
+
+/**
+ * follow_pfnmap_start() - Look up a pfn mapping at a user virtual address
+ * @args: Pointer to struct follow_pfnmap_args
+ *
+ * The caller needs to set up args->vma and args->address to point to the
+ * virtual address to look up.  On a successful return, the results will
+ * be put into the other output fields.
+ *
+ * After the caller finishes using the fields, it must invoke
+ * follow_pfnmap_end() to properly release the locks and resources taken
+ * by the lookup.
+ *
+ * Between the start() and end() calls, the results in @args stay valid
+ * as proper locks are held.  After end() is called, none of the fields
+ * in @args may be accessed anymore.
+ *
+ * If the PTE maps a refcounted page, callers are responsible for
+ * protecting against invalidation with MMU notifiers; otherwise access
+ * to the PFN at a later point in time can trigger use-after-free.
+ *
+ * Only IO mappings and raw PFN mappings are allowed.  The mmap semaphore
+ * should be taken for read, and it cannot be released before end() is
+ * invoked.
+ *
+ * This function must not be used to modify PTE content.
+ *
+ * Return: zero on success, a negative error code otherwise.
+ */
+int follow_pfnmap_start(struct follow_pfnmap_args *args)
+{
+	struct vm_area_struct *vma = args->vma;
+	unsigned long address = args->address;
+	struct mm_struct *mm = vma->vm_mm;
+	spinlock_t *lock;
+	pgd_t *pgdp;
+	p4d_t *p4dp, p4d;
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+
+	pfnmap_lockdep_assert(vma);
+
+	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+		goto out;
+
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		goto out;
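+
+	/*
+	 * The walk below is lockless: each leaf candidate is re-read and
+	 * re-checked once the pgtable lock is held, retrying if it changed.
+	 */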
+retry:
+	pgdp = pgd_offset(mm, address);
+	if (pgd_none(*pgdp) || unlikely(pgd_bad(*pgdp)))
+		goto out;
+
+	p4dp = p4d_offset(pgdp, address);
+	p4d = READ_ONCE(*p4dp);
+	if (p4d_none(p4d) || unlikely(p4d_bad(p4d)))
+		goto out;
+
+	pudp = pud_offset(p4dp, address);
+	pud = READ_ONCE(*pudp);
+	if (pud_none(pud))
+		goto out;
+	if (pud_leaf(pud)) {
+		lock = pud_lock(mm, pudp);
+		pud = READ_ONCE(*pudp);
+		if (unlikely(!pud_leaf(pud))) {
+			spin_unlock(lock);
+			goto retry;
+		}
+		pfnmap_args_setup(args, lock, NULL, pud_pgprot(pud),
+				  pud_pfn(pud), PUD_MASK, pud_write(pud),
+				  pud_special(pud));
+		return 0;
+	}
+
+	pmdp = pmd_offset(pudp, address);
+	pmd = pmdp_get_lockless(pmdp);
+	if (pmd_leaf(pmd)) {
+		lock = pmd_lock(mm, pmdp);
+		pmd = pmdp_get(pmdp);
+		if (unlikely(!pmd_leaf(pmd))) {
+			spin_unlock(lock);
+			goto retry;
+		}
+		pfnmap_args_setup(args, lock, NULL, pmd_pgprot(pmd),
+				  pmd_pfn(pmd), PMD_MASK, pmd_write(pmd),
+				  pmd_special(pmd));
+		return 0;
+	}
+
+	ptep = pte_offset_map_lock(mm, pmdp, address, &lock);
+	if (!ptep)
+		goto out;
+	pte = ptep_get(ptep);
+	if (!pte_present(pte))
+		goto unlock;
+	pfnmap_args_setup(args, lock, ptep, pte_pgprot(pte),
+			  pte_pfn(pte), PAGE_MASK, pte_write(pte),
+			  pte_special(pte));
+	return 0;
+unlock:
+	pte_unmap_unlock(ptep, lock);
+out:
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(follow_pfnmap_start);
+
+/**
+ * follow_pfnmap_end() - End a follow_pfnmap_start() process
+ * @args: Pointer to struct follow_pfnmap_args
+ *
+ * Must be used in pairs with follow_pfnmap_start().  See the start()
+ * function above for more information.
+ */
+void follow_pfnmap_end(struct follow_pfnmap_args *args)
+{
+	if (args->lock)
+		spin_unlock(args->lock);
+	if (args->ptep)
+		pte_unmap(args->ptep);
+}
+EXPORT_SYMBOL_GPL(follow_pfnmap_end);
+
#ifdef CONFIG_HAVE_IOREMAP_PROT
/**
* generic_access_phys - generic implementation for iomem mmap access
--
2.45.0