From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson <seanjc@google.com>,
Oscar Salvador <osalvador@suse.de>,
Jason Gunthorpe <jgg@nvidia.com>,
Axel Rasmussen <axelrasmussen@google.com>,
linux-arm-kernel@lists.infradead.org, x86@kernel.org,
peterx@redhat.com, Will Deacon <will@kernel.org>,
Gavin Shan <gshan@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>, Zi Yan <ziy@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Ingo Molnar <mingo@redhat.com>,
Alistair Popple <apopple@nvidia.com>,
Borislav Petkov <bp@alien8.de>,
David Hildenbrand <david@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
kvm@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
Alex Williamson <alex.williamson@redhat.com>,
Yan Zhao <yan.y.zhao@intel.com>
Subject: [PATCH 16/19] mm: Remove follow_pte()
Date: Fri, 9 Aug 2024 12:09:06 -0400
Message-ID: <20240809160909.1023470-17-peterx@redhat.com>
In-Reply-To: <20240809160909.1023470-1-peterx@redhat.com>
All follow_pte() users have been converted to the new follow_pfnmap*() API,
so the old interface has no remaining callers.  Remove it.
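
For reference, a converted caller typically follows the pattern sketched
below.  This is illustrative only (not lifted from any of the converted call
sites, and the helper name is made up); it assumes the
follow_pfnmap_start()/follow_pfnmap_end() API and struct follow_pfnmap_args
introduced earlier in this series:

/*
 * Illustrative sketch: a typical former follow_pte() caller, rewritten
 * against the new API.  "lookup_pfnmap_pfn" is a hypothetical helper.
 */
static int lookup_pfnmap_pfn(struct vm_area_struct *vma,
			     unsigned long addr, unsigned long *pfn)
{
	struct follow_pfnmap_args args = { .vma = vma, .address = addr };
	int ret;

	ret = follow_pfnmap_start(&args);
	if (ret)
		return ret;

	/* Outputs (args.pfn, args.pgprot, args.writable) are only stable
	 * until follow_pfnmap_end() drops the page table lock.
	 */
	*pfn = args.pfn;

	follow_pfnmap_end(&args);
	return 0;
}
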
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h |  2 --
 mm/memory.c        | 73 ----------------------------------------------
 2 files changed, 75 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7471302658af..c5949b8052c6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2369,8 +2369,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_pte(struct vm_area_struct *vma, unsigned long address,
-	       pte_t **ptepp, spinlock_t **ptlp);
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
 			void *buf, int len, int write);
 
diff --git a/mm/memory.c b/mm/memory.c
index 313c17eedf56..72f61fffdda2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6265,79 +6265,6 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-/**
- * follow_pte - look up PTE at a user virtual address
- * @vma: the memory mapping
- * @address: user virtual address
- * @ptepp: location to store found PTE
- * @ptlp: location to store the lock for the PTE
- *
- * On a successful return, the pointer to the PTE is stored in @ptepp;
- * the corresponding lock is taken and its location is stored in @ptlp.
- *
- * The contents of the PTE are only stable until @ptlp is released using
- * pte_unmap_unlock(). This function will fail if the PTE is non-present.
- * Present PTEs may include PTEs that map refcounted pages, such as
- * anonymous folios in COW mappings.
- *
- * Callers must be careful when relying on PTE content after
- * pte_unmap_unlock(). Especially if the PTE maps a refcounted page,
- * callers must protect against invalidation with MMU notifiers; otherwise
- * access to the PFN at a later point in time can trigger use-after-free.
- *
- * Only IO mappings and raw PFN mappings are allowed. The mmap semaphore
- * should be taken for read.
- *
- * This function must not be used to modify PTE content.
- *
- * Return: zero on success, -ve otherwise.
- */
-int follow_pte(struct vm_area_struct *vma, unsigned long address,
-	       pte_t **ptepp, spinlock_t **ptlp)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	pgd_t *pgd;
-	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *ptep;
-
-	mmap_assert_locked(mm);
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
-		goto out;
-
-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
-		goto out;
-
-	pgd = pgd_offset(mm, address);
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		goto out;
-
-	p4d = p4d_offset(pgd, address);
-	if (p4d_none(*p4d) || unlikely(p4d_bad(*p4d)))
-		goto out;
-
-	pud = pud_offset(p4d, address);
-	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
-		goto out;
-
-	pmd = pmd_offset(pud, address);
-	VM_BUG_ON(pmd_trans_huge(*pmd));
-
-	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
-	if (!ptep)
-		goto out;
-	if (!pte_present(ptep_get(ptep)))
-		goto unlock;
-	*ptepp = ptep;
-	return 0;
-unlock:
-	pte_unmap_unlock(ptep, *ptlp);
-out:
-	return -EINVAL;
-}
-EXPORT_SYMBOL_GPL(follow_pte);
-
 static inline void pfnmap_args_setup(struct follow_pfnmap_args *args,
 				     spinlock_t *lock, pte_t *ptep,
 				     pgprot_t pgprot, unsigned long pfn_base,
--
2.45.0