* [RFC PATCH 03/18] fs/dax: use ptdesc in dax_pmd_load_hole
From: alexs @ 2024-07-30 6:46 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, nvdimm, linux-fsdevel,
Christian Brauner, Alexander Viro, Jan Kara, Dan Williams
From: Alex Shi <alexs@kernel.org>
Since we now have the ptdesc struct, use it to replace pgtable_t, aka
'struct page *'.
This prepares for returning a ptdesc pointer from the pte_alloc_one
series of functions.
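For readers new to ptdesc, a minimal sketch of the converter helpers this
and the following patches lean on (as currently defined via
<linux/mm_types.h> and <linux/mm.h>; struct ptdesc overlays struct page,
so these are type-only conversions with no runtime cost):
	/* wrap and unwrap a page table page */
	struct ptdesc *ptdesc = page_ptdesc(page);	/* struct page * -> ptdesc */
	struct page *pt_page = ptdesc_page(ptdesc);	/* ptdesc -> struct page * */
	void *vaddr = ptdesc_address(ptdesc);		/* like page_address() */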
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Cc: Christian Brauner <brauner@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
---
fs/dax.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index becb4a6920c6..6f7cea248206 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1206,7 +1206,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
unsigned long pmd_addr = vmf->address & PMD_MASK;
struct vm_area_struct *vma = vmf->vma;
struct inode *inode = mapping->host;
- pgtable_t pgtable = NULL;
+ struct ptdesc *ptdesc = NULL;
struct folio *zero_folio;
spinlock_t *ptl;
pmd_t pmd_entry;
@@ -1222,8 +1222,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
DAX_PMD | DAX_ZERO_PAGE);
if (arch_needs_pgtable_deposit()) {
- pgtable = pte_alloc_one(vma->vm_mm);
- if (!pgtable)
+ ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+ if (!ptdesc)
return VM_FAULT_OOM;
}
@@ -1233,8 +1233,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
goto fallback;
}
- if (pgtable) {
- pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
+ if (ptdesc) {
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc));
mm_inc_nr_ptes(vma->vm_mm);
}
pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
@@ -1245,8 +1245,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
return VM_FAULT_NOPAGE;
fallback:
- if (pgtable)
- pte_free(vma->vm_mm, pgtable);
+ if (ptdesc)
+ pte_free(vma->vm_mm, ptdesc_page(ptdesc));
trace_dax_pmd_load_hole_fallback(inode, vmf, zero_folio, *entry);
return VM_FAULT_FALLBACK;
}
--
2.43.0
* [RFC PATCH 09/18] mm/pgtable: fully use ptdesc in pte_alloc_one series functions
From: alexs @ 2024-07-30 6:47 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, nvdimm, linux-fsdevel, sparclinux,
Dawei Li, Arnd Bergmann, Christian Brauner, Alexander Viro,
Jan Kara, Dan Williams, Max Filippov, Chris Zankel,
David S . Miller, Naveen N . Rao, Bjorn Helgaas, Sam Ravnborg,
Jason Gunthorpe
From: Alex Shi <alexs@kernel.org>
Replace pgtable_t and struct page with struct ptdesc in the
pte_alloc_one series of functions.
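As a sketch, the generic allocator after this patch reads as below
(reconstructed from the asm-generic hunk further down; the pagetable_*()
calls are the existing ptdesc primitives and are unchanged here):
	static inline struct ptdesc *__pte_alloc_one_noprof(struct mm_struct *mm,
							    gfp_t gfp)
	{
		struct ptdesc *ptdesc;

		ptdesc = pagetable_alloc_noprof(gfp, 0);
		if (!ptdesc)
			return NULL;
		if (!pagetable_pte_ctor(ptdesc)) {
			pagetable_free(ptdesc);
			return NULL;
		}
		/* was: return ptdesc_page(ptdesc); */
		return ptdesc;
	}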
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Dawei Li <dawei.li@shingroup.cn>
Cc: Vishal Moola <vishal.moola@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Breno Leitao <leitao@debian.org>
Cc: Josh Poimboeuf <jpoimboe@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
---
arch/arm/include/asm/pgalloc.h | 9 ++++-----
arch/powerpc/include/asm/pgalloc.h | 4 ++--
arch/s390/include/asm/pgalloc.h | 2 +-
arch/sparc/include/asm/pgalloc_32.h | 2 +-
arch/sparc/include/asm/pgalloc_64.h | 2 +-
arch/sparc/mm/init_64.c | 2 +-
arch/sparc/mm/srmmu.c | 4 ++--
arch/x86/include/asm/pgalloc.h | 2 +-
arch/x86/mm/pgtable.c | 2 +-
arch/xtensa/include/asm/pgalloc.h | 12 ++++++------
fs/dax.c | 2 +-
include/asm-generic/pgalloc.h | 6 +++---
mm/huge_memory.c | 8 ++++----
mm/memory.c | 8 ++++----
14 files changed, 32 insertions(+), 33 deletions(-)
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index a17f01235c29..e8501a6c3336 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -91,16 +91,15 @@ pte_alloc_one_kernel(struct mm_struct *mm)
#define PGTABLE_HIGHMEM 0
#endif
-static inline pgtable_t
-pte_alloc_one(struct mm_struct *mm)
+static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
- struct page *pte;
+ struct ptdesc *pte;
pte = __pte_alloc_one(mm, GFP_PGTABLE_USER | PGTABLE_HIGHMEM);
if (!pte)
return NULL;
- if (!PageHighMem(pte))
- clean_pte_table(page_address(pte));
+ if (!PageHighMem(ptdesc_page(pte)))
+ clean_pte_table(ptdesc_address(pte));
return pte;
}
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index 3a971e2a8c73..37512f344b37 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -27,9 +27,9 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
return (pte_t *)pte_fragment_alloc(mm, 1);
}
-static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
+static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
- return (pgtable_t)pte_fragment_alloc(mm, 0);
+ return (struct ptdesc *)pte_fragment_alloc(mm, 0);
}
void pte_frag_destroy(void *pte_frag);
diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 7b84ef6dc4b6..771494526f6e 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -137,7 +137,7 @@ static inline void pmd_populate(struct mm_struct *mm,
* page table entry allocation/free routines.
*/
#define pte_alloc_one_kernel(mm) ((pte_t *)page_table_alloc(mm))
-#define pte_alloc_one(mm) ((pte_t *)page_table_alloc(mm))
+#define pte_alloc_one(mm) ((struct ptdesc *)page_table_alloc(mm))
#define pte_free_kernel(mm, pte) page_table_free(mm, (unsigned long *) pte)
#define pte_free(mm, pte) page_table_free(mm, (unsigned long *) pte)
diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h
index 4f73e87b22a3..bc3ef54d9564 100644
--- a/arch/sparc/include/asm/pgalloc_32.h
+++ b/arch/sparc/include/asm/pgalloc_32.h
@@ -55,7 +55,7 @@ static inline void free_pmd_fast(pmd_t * pmd)
void pmd_set(pmd_t *pmdp, pte_t *ptep);
#define pmd_populate_kernel pmd_populate
-pgtable_t pte_alloc_one(struct mm_struct *mm);
+struct ptdesc *pte_alloc_one(struct mm_struct *mm);
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
{
diff --git a/arch/sparc/include/asm/pgalloc_64.h b/arch/sparc/include/asm/pgalloc_64.h
index caa7632be4c2..285aa7958912 100644
--- a/arch/sparc/include/asm/pgalloc_64.h
+++ b/arch/sparc/include/asm/pgalloc_64.h
@@ -61,7 +61,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
}
pte_t *pte_alloc_one_kernel(struct mm_struct *mm);
-pgtable_t pte_alloc_one(struct mm_struct *mm);
+struct ptdesc *pte_alloc_one(struct mm_struct *mm);
void pte_free_kernel(struct mm_struct *mm, pte_t *pte);
void pte_free(struct mm_struct *mm, pgtable_t ptepage);
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 53d7cb5bbffe..e1b33f996469 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2900,7 +2900,7 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
return pte;
}
-pgtable_t pte_alloc_one(struct mm_struct *mm)
+struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 9df51a62333d..60bb8628bb1f 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -346,7 +346,7 @@ pgd_t *get_pgd_fast(void)
* Alignments up to the page size are the same for physical and virtual
* addresses of the nocache area.
*/
-pgtable_t pte_alloc_one(struct mm_struct *mm)
+struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
pte_t *ptep;
struct page *page;
@@ -362,7 +362,7 @@ pgtable_t pte_alloc_one(struct mm_struct *mm)
}
spin_unlock(&mm->page_table_lock);
- return ptep;
+ return (struct ptdesc *)ptep;
}
void pte_free(struct mm_struct *mm, pgtable_t ptep)
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index dcd836b59beb..497c757b5b98 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -51,7 +51,7 @@ extern gfp_t __userpte_alloc_gfp;
extern pgd_t *pgd_alloc(struct mm_struct *);
extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
-extern pgtable_t pte_alloc_one(struct mm_struct *);
+extern struct ptdesc *pte_alloc_one(struct mm_struct *);
extern void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 93e54ba91fbf..c27d15cd01b9 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -28,7 +28,7 @@ void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM;
-pgtable_t pte_alloc_one(struct mm_struct *mm)
+struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
return __pte_alloc_one(mm, __userpte_alloc_gfp);
}
diff --git a/arch/xtensa/include/asm/pgalloc.h b/arch/xtensa/include/asm/pgalloc.h
index 7fc0f9126dd3..a9206c02956e 100644
--- a/arch/xtensa/include/asm/pgalloc.h
+++ b/arch/xtensa/include/asm/pgalloc.h
@@ -51,15 +51,15 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
return ptep;
}
-static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
+static inline struct ptdesc *pte_alloc_one(struct mm_struct *mm)
{
- struct page *page;
+ struct ptdesc *ptdesc;
- page = __pte_alloc_one(mm, GFP_PGTABLE_USER);
- if (!page)
+ ptdesc = __pte_alloc_one(mm, GFP_PGTABLE_USER);
+ if (!ptdesc)
return NULL;
- ptes_clear(page_address(page));
- return page;
+ ptes_clear(ptdesc_address(ptdesc));
+ return ptdesc;
}
#endif /* CONFIG_MMU */
diff --git a/fs/dax.c b/fs/dax.c
index 6f7cea248206..51cbc08b22e7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1222,7 +1222,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
DAX_PMD | DAX_ZERO_PAGE);
if (arch_needs_pgtable_deposit()) {
- ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+ ptdesc = pte_alloc_one(vma->vm_mm);
if (!ptdesc)
return VM_FAULT_OOM;
}
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 7c48f5fbf8aa..1a4070f8d5dd 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -63,7 +63,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
*
- * Return: `struct page` referencing the ptdesc or %NULL on error
+ * Return: the allocated ptdesc or %NULL on error
*/
-static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp)
+static inline struct ptdesc *__pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp)
{
struct ptdesc *ptdesc;
@@ -75,7 +75,7 @@ static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp)
return NULL;
}
- return ptdesc_page(ptdesc);
+ return ptdesc;
}
#define __pte_alloc_one(...) alloc_hooks(__pte_alloc_one_noprof(__VA_ARGS__))
@@ -88,7 +88,7 @@ static inline pgtable_t __pte_alloc_one_noprof(struct mm_struct *mm, gfp_t gfp)
*
- * Return: `struct page` referencing the ptdesc or %NULL on error
+ * Return: the allocated ptdesc or %NULL on error
*/
-static inline pgtable_t pte_alloc_one_noprof(struct mm_struct *mm)
+static inline struct ptdesc *pte_alloc_one_noprof(struct mm_struct *mm)
{
return __pte_alloc_one_noprof(mm, GFP_PGTABLE_USER);
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 236e1582d97e..6274eb7559ac 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -959,7 +959,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
}
folio_throttle_swaprate(folio, gfp);
- ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+ ptdesc = pte_alloc_one(vma->vm_mm);
if (unlikely(!ptdesc)) {
ret = VM_FAULT_OOM;
goto release;
@@ -1091,7 +1091,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
struct folio *zero_folio;
vm_fault_t ret;
- ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+ ptdesc = pte_alloc_one(vma->vm_mm);
if (unlikely(!ptdesc))
return VM_FAULT_OOM;
zero_folio = mm_get_huge_zero_folio(vma->vm_mm);
@@ -1213,7 +1213,7 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
return VM_FAULT_SIGBUS;
if (arch_needs_pgtable_deposit()) {
- ptdesc = page_ptdesc(pte_alloc_one(vma->vm_mm));
+ ptdesc = pte_alloc_one(vma->vm_mm);
if (!ptdesc)
return VM_FAULT_OOM;
}
@@ -1376,7 +1376,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (!vma_is_anonymous(dst_vma))
return 0;
- ptdesc = page_ptdesc(pte_alloc_one(dst_mm));
+ ptdesc = pte_alloc_one(dst_mm);
if (unlikely(!ptdesc))
goto out;
diff --git a/mm/memory.c b/mm/memory.c
index 5b01d94a0b5f..37529e0a9ce2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -445,7 +445,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
{
- struct ptdesc *ptdesc = page_ptdesc(pte_alloc_one(mm));
+ struct ptdesc *ptdesc = pte_alloc_one(mm);
if (!ptdesc)
return -ENOMEM;
@@ -4647,7 +4647,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
* # flush A, B to clear the writeback
*/
if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
- vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
+ vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
@@ -4725,7 +4725,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
* related to pte entry. Use the preallocated table for that.
*/
if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) {
- vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
+ vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
@@ -5010,7 +5010,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
pte_off + vma_pages(vmf->vma) - vma_off) - 1;
if (pmd_none(*vmf->pmd)) {
- vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
+ vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vmf->vma->vm_mm));
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
--
2.43.0
* [RFC PATCH 10/18] mm/pgtable: pass ptdesc to pte_free()
From: alexs @ 2024-07-30 6:47 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, nvdimm, linux-fsdevel, sparclinux,
Bjorn Helgaas, Arnd Bergmann, Christian Brauner, Alexander Viro,
Jan Kara, Dan Williams, David S . Miller, Naveen N . Rao,
Dawei Li
From: Alex Shi <alexs@kernel.org>
Now we can remove a couple of page<->ptdesc converters.
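A sketch of the symmetry this buys at call sites (names as in the hunks
below; the converters being dropped date from the interim steps of this
series):
	struct ptdesc *ptdesc = pte_alloc_one(mm);	/* ptdesc since patch 09 */

	if (ptdesc)
		pte_free(mm, ptdesc);	/* was: pte_free(mm, ptdesc_page(ptdesc)) */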
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-mm@kvack.org
Cc: linux-arch@vger.kernel.org
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Vishal Moola <vishal.moola@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Dawei Li <dawei.li@shingroup.cn>
Cc: Hugh Dickins <hughd@google.com>
---
arch/arm/mm/pgd.c | 2 +-
arch/m68k/include/asm/motorola_pgalloc.h | 4 ++--
arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 +-
arch/powerpc/include/asm/pgalloc.h | 2 +-
arch/sparc/include/asm/pgalloc_32.h | 2 +-
arch/sparc/mm/srmmu.c | 2 +-
fs/dax.c | 2 +-
include/asm-generic/pgalloc.h | 4 +---
mm/debug_vm_pgtable.c | 2 +-
mm/huge_memory.c | 20 ++++++++++----------
mm/memory.c | 4 ++--
mm/pgtable-generic.c | 2 +-
12 files changed, 23 insertions(+), 25 deletions(-)
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index f8e9bc58a84f..c384b734d752 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -168,7 +168,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
pte = pmd_pgtable(*pmd);
pmd_clear(pmd);
- pte_free(mm, pte);
+ pte_free(mm, page_ptdesc(pte));
mm_dec_nr_ptes(mm);
no_pmd:
pud_clear(pud);
diff --git a/arch/m68k/include/asm/motorola_pgalloc.h b/arch/m68k/include/asm/motorola_pgalloc.h
index 74a817d9387f..f6bb375971dc 100644
--- a/arch/m68k/include/asm/motorola_pgalloc.h
+++ b/arch/m68k/include/asm/motorola_pgalloc.h
@@ -39,9 +39,9 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
return get_pointer_table(TABLE_PTE);
}
-static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
+static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptdesc)
{
- free_pointer_table(pgtable, TABLE_PTE);
+ free_pointer_table(ptdesc_page(ptdesc), TABLE_PTE);
}
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index dd2cff53a111..eb7d2ca59f62 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -162,7 +162,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
- pgtable_t pte_page)
+ struct ptdesc *pte_page)
{
*pmd = __pmd(__pgtable_ptr_val(pte_page) | PMD_VAL_BITS);
}
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index 37512f344b37..12520521163e 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -40,7 +40,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
pte_fragment_free((unsigned long *)pte, 1);
}
-static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptepage)
{
pte_fragment_free((unsigned long *)ptepage, 0);
}
diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h
index bc3ef54d9564..addaade56f21 100644
--- a/arch/sparc/include/asm/pgalloc_32.h
+++ b/arch/sparc/include/asm/pgalloc_32.h
@@ -71,7 +71,7 @@ static inline void free_pte_fast(pte_t *pte)
#define pte_free_kernel(mm, pte) free_pte_fast(pte)
-void pte_free(struct mm_struct * mm, pgtable_t pte);
+void pte_free(struct mm_struct *mm, struct ptdesc *pte);
#define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte)
#endif /* _SPARC_PGALLOC_H */
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 60bb8628bb1f..05be7d86eda3 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -365,7 +365,7 @@ struct ptdesc *pte_alloc_one(struct mm_struct *mm)
return (struct ptdesc *)ptep;
}
-void pte_free(struct mm_struct *mm, pgtable_t ptep)
+void pte_free(struct mm_struct *mm, struct ptdesc *ptep)
{
struct page *page;
diff --git a/fs/dax.c b/fs/dax.c
index 51cbc08b22e7..61b9bd5200da 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1246,7 +1246,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
fallback:
if (ptdesc)
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
trace_dax_pmd_load_hole_fallback(inode, vmf, zero_folio, *entry);
return VM_FAULT_FALLBACK;
}
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 1a4070f8d5dd..5f249ec9d289 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -105,10 +105,8 @@ static inline struct ptdesc *pte_alloc_one_noprof(struct mm_struct *mm)
* @mm: the mm_struct of the current context
- * @pte_page: the `struct page` referencing the ptdesc
+ * @ptdesc: the ptdesc to free
*/
-static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
+static inline void pte_free(struct mm_struct *mm, struct ptdesc *ptdesc)
{
- struct ptdesc *ptdesc = page_ptdesc(pte_page);
-
pagetable_pte_dtor(ptdesc);
pagetable_free(ptdesc);
}
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index e4969fb54da3..f256bc816744 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -1049,7 +1049,7 @@ static void __init destroy_args(struct pgtable_debug_args *args)
/* Free page table entries */
if (args->start_ptep) {
- pte_free(args->mm, args->start_ptep);
+ pte_free(args->mm, page_ptdesc(args->start_ptep));
mm_dec_nr_ptes(args->mm);
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6274eb7559ac..dc323453fa02 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -987,7 +987,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
if (userfaultfd_missing(vma)) {
spin_unlock(vmf->ptl);
folio_put(folio);
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
ret = handle_userfault(vmf, VM_UFFD_MISSING);
VM_BUG_ON(ret & VM_FAULT_FALLBACK);
return ret;
@@ -1013,7 +1013,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
spin_unlock(vmf->ptl);
release:
if (ptdesc)
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
folio_put(folio);
return ret;
@@ -1096,7 +1096,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return VM_FAULT_OOM;
zero_folio = mm_get_huge_zero_folio(vma->vm_mm);
if (unlikely(!zero_folio)) {
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
count_vm_event(THP_FAULT_FALLBACK);
return VM_FAULT_FALLBACK;
}
@@ -1106,10 +1106,10 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
ret = check_stable_address_space(vma->vm_mm);
if (ret) {
spin_unlock(vmf->ptl);
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
} else if (userfaultfd_missing(vma)) {
spin_unlock(vmf->ptl);
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
ret = handle_userfault(vmf, VM_UFFD_MISSING);
VM_BUG_ON(ret & VM_FAULT_FALLBACK);
} else {
@@ -1120,7 +1120,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
}
} else {
spin_unlock(vmf->ptl);
- pte_free(vma->vm_mm, ptdesc_page(ptdesc));
+ pte_free(vma->vm_mm, ptdesc);
}
return ret;
}
@@ -1178,7 +1178,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
out_unlock:
spin_unlock(ptl);
if (ptdesc)
- pte_free(mm, ptdesc_page(ptdesc));
+ pte_free(mm, ptdesc);
}
/**
@@ -1414,7 +1414,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
#endif
if (unlikely(!pmd_trans_huge(pmd))) {
- pte_free(dst_mm, ptdesc_page(ptdesc));
+ pte_free(dst_mm, ptdesc);
goto out_unlock;
}
/*
@@ -1440,7 +1440,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) {
/* Page maybe pinned: split and retry the fault on PTEs. */
folio_put(src_folio);
- pte_free(dst_mm, ptdesc_page(ptdesc));
+ pte_free(dst_mm, ptdesc);
spin_unlock(src_ptl);
spin_unlock(dst_ptl);
__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
@@ -1830,7 +1830,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
pgtable_t pgtable;
pgtable = pgtable_trans_huge_withdraw(mm, pmd);
- pte_free(mm, pgtable);
+ pte_free(mm, page_ptdesc(pgtable));
mm_dec_nr_ptes(mm);
}
diff --git a/mm/memory.c b/mm/memory.c
index 37529e0a9ce2..3014168e7296 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -451,7 +451,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
pmd_install(mm, pmd, (pgtable_t *)&ptdesc);
if (ptdesc)
- pte_free(mm, ptdesc_page(ptdesc));
+ pte_free(mm, ptdesc);
return 0;
}
@@ -5196,7 +5196,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
/* preallocated pagetable is unused: free it */
if (vmf->prealloc_pte) {
- pte_free(vm_mm, vmf->prealloc_pte);
+ pte_free(vm_mm, page_ptdesc(vmf->prealloc_pte));
vmf->prealloc_pte = NULL;
}
return ret;
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index f34a8d115f5b..92245a32656b 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -241,7 +241,7 @@ static void pte_free_now(struct rcu_head *head)
struct ptdesc *ptdesc;
ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
- pte_free(NULL /* mm not passed and not used */, (pgtable_t)ptdesc);
+ pte_free(NULL /* mm not passed and not used */, ptdesc);
}
void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
--
2.43.0
* [RFC PATCH 14/18] mm/pgtable: use ptdesc in pgtable_trans_huge_deposit
From: alexs @ 2024-07-30 7:27 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, Naveen N . Rao, nvdimm, linux-fsdevel,
sparclinux, Kinsey Ho, Ingo Molnar, Christian Brauner,
Alexander Viro, Jan Kara, Dan Williams, Andreas Larsson,
David S . Miller, Jason Gunthorpe
From: Alex Shi <alexs@kernel.org>
A step toward replacing pgtable_t with struct ptdesc.
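As an illustration (a sketch, not part of this patch), a deposit user such
as zap_deposited_table() ends up fully typed once this lands, given that
pgtable_trans_huge_withdraw() already returns a struct ptdesc * earlier in
the series:
	static void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
	{
		struct ptdesc *ptdesc;

		ptdesc = pgtable_trans_huge_withdraw(mm, pmd);
		pte_free(mm, ptdesc);
		mm_dec_nr_ptes(mm);
	}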
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-mm@kvack.org
Cc: nvdimm@lists.linux.dev
Cc: linux-fsdevel@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Barry Song <baohua@kernel.org>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kinsey Ho <kinseyho@google.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 6 +++---
arch/powerpc/mm/book3s64/hash_pgtable.c | 6 +++---
arch/powerpc/mm/book3s64/radix_pgtable.c | 6 +++---
arch/s390/include/asm/pgtable.h | 2 +-
arch/s390/mm/pgtable.c | 6 +++---
arch/sparc/include/asm/pgtable_64.h | 2 +-
arch/sparc/mm/tlb.c | 6 +++---
fs/dax.c | 2 +-
include/linux/pgtable.h | 2 +-
mm/debug_vm_pgtable.c | 2 +-
mm/huge_memory.c | 14 +++++++-------
mm/khugepaged.c | 2 +-
mm/memory.c | 2 +-
mm/pgtable-generic.c | 8 ++++----
14 files changed, 33 insertions(+), 33 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 0ee440b819d7..cf44e2440825 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1365,11 +1365,11 @@ pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
#define __HAVE_ARCH_PGTABLE_DEPOSIT
static inline void pgtable_trans_huge_deposit(struct mm_struct *mm,
- pmd_t *pmdp, pgtable_t pgtable)
+ pmd_t *pmdp, struct ptdesc *ptdesc)
{
if (radix_enabled())
- return radix__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
- return hash__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+ return radix__pgtable_trans_huge_deposit(mm, pmdp, ptdesc);
+ return hash__pgtable_trans_huge_deposit(mm, pmdp, ptdesc);
}
#define __HAVE_ARCH_PGTABLE_WITHDRAW
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 35562d1f4267..8fd2c833dc3d 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -265,16 +265,16 @@ pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addres
* the base page size hptes
*/
void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
+ struct ptdesc *ptdesc)
{
- pgtable_t *pgtable_slot;
+ pte_t **pgtable_slot;
assert_spin_locked(pmd_lockptr(mm, pmdp));
/*
* we store the pgtable in the second half of PMD
*/
- pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
- *pgtable_slot = pgtable;
+ pgtable_slot = (pte_t **)pmdp + PTRS_PER_PMD;
+ *pgtable_slot = (pte_t *)ptdesc;
/*
* expose the deposited pgtable to other cpus.
* before we set the hugepage PTE at pmd level
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 3b9bb19510e3..c33e860966ad 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1478,9 +1478,9 @@ pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long addre
* list_head memory area.
*/
void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
+ struct ptdesc *ptdesc)
{
- struct list_head *lh = (struct list_head *) pgtable;
+ struct list_head *lh = (struct list_head *)ptdesc;
assert_spin_locked(pmd_lockptr(mm, pmdp));
@@ -1489,7 +1489,7 @@ void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
INIT_LIST_HEAD(lh);
else
list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
- pmd_huge_pte(mm, pmdp) = pgtable;
+ pmd_huge_pte(mm, pmdp) = ptdesc;
}
struct ptdesc *radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index cf0baf4bfe5c..d7b635f5e1e7 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1735,7 +1735,7 @@ pud_t pudp_xchg_direct(struct mm_struct *, unsigned long, pud_t *, pud_t);
#define __HAVE_ARCH_PGTABLE_DEPOSIT
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable);
+ struct ptdesc *ptdesc);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index b9016ee145cb..cf1a6aeb66d4 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -563,9 +563,9 @@ EXPORT_SYMBOL(pudp_xchg_direct);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
+ struct ptdesc *ptdesc)
{
- struct list_head *lh = (struct list_head *) pgtable;
+ struct list_head *lh = (struct list_head *)ptdesc;
assert_spin_locked(pmd_lockptr(mm, pmdp));
@@ -574,7 +574,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
INIT_LIST_HEAD(lh);
else
list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
- pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable;
+ pmd_huge_pte(mm, pmdp) = ptdesc;
}
struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index bfefd678e220..c71be5ef8b06 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -995,7 +995,7 @@ extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
#define __HAVE_ARCH_PGTABLE_DEPOSIT
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable);
+ struct ptdesc *ptdesc);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index bd2d3b1f6ba3..eeed4427f524 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -267,9 +267,9 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
}
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
+ struct ptdesc *ptdesc)
{
- struct list_head *lh = (struct list_head *) pgtable;
+ struct list_head *lh = (struct list_head *)ptdesc;
assert_spin_locked(&mm->page_table_lock);
@@ -278,7 +278,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
INIT_LIST_HEAD(lh);
else
list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
- pmd_huge_pte(mm, pmdp) = (struct ptdesc *)pgtable;
+ pmd_huge_pte(mm, pmdp) = ptdesc;
}
struct ptdesc *pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
diff --git a/fs/dax.c b/fs/dax.c
index 61b9bd5200da..4b4e6acb0efc 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1234,7 +1234,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
}
if (ptdesc) {
- pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc);
mm_inc_nr_ptes(vma->vm_mm);
}
pmd_entry = mk_pmd(&zero_folio->page, vmf->vma->vm_page_prot);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 3fa7b93580a3..9d256c548f5e 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -925,7 +925,7 @@ static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
#ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable);
+ struct ptdesc *ptdesc);
#endif
#ifndef __HAVE_ARCH_PGTABLE_WITHDRAW
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index f256bc816744..8550eec32aba 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -225,7 +225,7 @@ static void __init pmd_advanced_tests(struct pgtable_debug_args *args)
/* Align the address wrt HPAGE_PMD_SIZE */
vaddr &= HPAGE_PMD_MASK;
- pgtable_trans_huge_deposit(args->mm, args->pmdp, args->start_ptep);
+ pgtable_trans_huge_deposit(args->mm, args->pmdp, page_ptdesc(args->start_ptep));
pmd = pfn_pmd(args->pmd_pfn, args->page_prot);
set_pmd_at(args->mm, vaddr, args->pmdp, pmd);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4dc36910c8aa..aac67e8a8cc8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -997,7 +997,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
- pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, ptdesc);
set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1064,7 +1064,7 @@ static void set_huge_zero_folio(struct ptdesc *ptdesc, struct mm_struct *mm,
return;
entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
entry = pmd_mkhuge(entry);
- pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(mm, pmd, ptdesc);
set_pmd_at(mm, haddr, pmd, entry);
mm_inc_nr_ptes(mm);
}
@@ -1167,7 +1167,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
}
if (ptdesc) {
- pgtable_trans_huge_deposit(mm, pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(mm, pmd, ptdesc);
mm_inc_nr_ptes(mm);
ptdesc = NULL;
}
@@ -1404,7 +1404,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
}
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc);
if (!userfaultfd_wp(dst_vma))
pmd = pmd_swp_clear_uffd_wp(pmd);
set_pmd_at(dst_mm, addr, dst_pmd, pmd);
@@ -1449,7 +1449,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
out_zero_page:
mm_inc_nr_ptes(dst_mm);
- pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, ptdesc);
pmdp_set_wrprotect(src_mm, addr, src_pmd);
if (!userfaultfd_wp(dst_vma))
pmd = pmd_clear_uffd_wp(pmd);
@@ -1962,7 +1962,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
struct ptdesc *ptdesc;
ptdesc = pgtable_trans_huge_withdraw(mm, old_pmd);
- pgtable_trans_huge_deposit(mm, new_pmd, ptdesc_page(ptdesc));
+ pgtable_trans_huge_deposit(mm, new_pmd, ptdesc);
}
pmd = move_soft_dirty_pmd(pmd);
set_pmd_at(mm, new_addr, new_pmd, pmd);
@@ -2236,7 +2236,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd);
src_ptdesc = pgtable_trans_huge_withdraw(mm, src_pmd);
- pgtable_trans_huge_deposit(mm, dst_pmd, ptdesc_page(src_ptdesc));
+ pgtable_trans_huge_deposit(mm, dst_pmd, src_ptdesc);
unlock_ptls:
double_pt_unlock(src_ptl, dst_ptl);
if (src_anon_vma) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f3b3db104615..48a54269472e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1232,7 +1232,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
BUG_ON(!pmd_none(*pmd));
folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
folio_add_lru_vma(folio, vma);
- pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ pgtable_trans_huge_deposit(mm, pmd, page_ptdesc(pgtable));
set_pmd_at(mm, address, pmd, _pmd);
update_mmu_cache_pmd(vma, address, pmd);
spin_unlock(pmd_ptl);
diff --git a/mm/memory.c b/mm/memory.c
index 27c2f63b7487..956cfe5f644d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4687,7 +4687,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, page_ptdesc(vmf->prealloc_pte));
/*
* We are going to consume the prealloc table,
* count that as nr_ptes.
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index de1ed30fea16..5e763682941d 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -163,16 +163,16 @@ pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
#ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
+ struct ptdesc *ptdesc)
{
assert_spin_locked(pmd_lockptr(mm, pmdp));
/* FIFO */
if (!pmd_huge_pte(mm, pmdp))
- INIT_LIST_HEAD(&pgtable->lru);
+ INIT_LIST_HEAD(&ptdesc->pt_list);
else
- list_add(&pgtable->lru, &pmd_huge_pte(mm, pmdp)->pt_list);
- pmd_huge_pte(mm, pmdp) = page_ptdesc(pgtable);
+ list_add(&ptdesc->pt_list, &pmd_huge_pte(mm, pmdp)->pt_list);
+ pmd_huge_pte(mm, pmdp) = ptdesc;
}
#endif
--
2.43.0
* [RFC PATCH 16/18] mm/pgtable: pass ptdesc to pmd_install
From: alexs @ 2024-07-30 7:27 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, Naveen N . Rao, linux-fsdevel,
Andrew Morton
From: Alex Shi <alexs@kernel.org>
Another step toward replacing pgtable_t with struct ptdesc, and a
preparation for converting vmf.prealloc_pte to a ptdesc as well.
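The contract, as in the __pte_alloc() hunk below: pmd_install() consumes
*pte when it populates the pmd, and leaves it non-NULL when another thread
won the race, in which case the caller frees the unused table. A sketch:
	struct ptdesc *ptdesc = pte_alloc_one(mm);

	if (!ptdesc)
		return -ENOMEM;
	pmd_install(mm, pmd, &ptdesc);
	if (ptdesc)		/* racing thread populated the pmd first */
		pte_free(mm, ptdesc);
	return 0;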
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
---
mm/filemap.c | 2 +-
mm/internal.h | 2 +-
mm/memory.c | 8 ++++----
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..3708ef71182e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3453,7 +3453,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
}
if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
- pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);
+ pmd_install(mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte);
return false;
}
diff --git a/mm/internal.h b/mm/internal.h
index 7a3bcc6d95e7..e4bc64d5176a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -320,7 +320,7 @@ void folio_activate(struct folio *folio);
void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
struct vm_area_struct *start_vma, unsigned long floor,
unsigned long ceiling, bool mm_wr_locked);
-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, struct ptdesc **pte);
struct zap_details;
void unmap_page_range(struct mmu_gather *tlb,
diff --git a/mm/memory.c b/mm/memory.c
index cbed8824059f..79685600d23f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -418,7 +418,7 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
} while (vma);
}
-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, struct ptdesc **pte)
{
spinlock_t *ptl = pmd_lock(mm, pmd);
@@ -438,7 +438,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
* smp_rmb() barriers in page table walking code.
*/
smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */
- pmd_populate(mm, pmd, (struct ptdesc *)(*pte));
+ pmd_populate(mm, pmd, *pte);
*pte = NULL;
}
spin_unlock(ptl);
@@ -450,7 +450,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
if (!ptdesc)
return -ENOMEM;
- pmd_install(mm, pmd, (pgtable_t *)&ptdesc);
+ pmd_install(mm, pmd, &ptdesc);
if (ptdesc)
pte_free(mm, ptdesc);
return 0;
@@ -4868,7 +4868,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
}
if (vmf->prealloc_pte)
- pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
+ pmd_install(vma->vm_mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte);
else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
return VM_FAULT_OOM;
}
--
2.43.0
* [RFC PATCH 17/18] mm: convert vmf.prealloc_pte to struct ptdesc pointer
From: alexs @ 2024-07-30 7:27 UTC
To: Will Deacon, Aneesh Kumar K . V, Nick Piggin, Peter Zijlstra,
Russell King, Catalin Marinas, Brian Cain, WANG Xuerui,
Geert Uytterhoeven, Jonas Bonn, Stefan Kristiansson,
Stafford Horne, Michael Ellerman, Naveen N Rao, Paul Walmsley,
Albert Ou, Thomas Gleixner, Borislav Petkov, Dave Hansen, x86,
H . Peter Anvin, Andy Lutomirski, Bibo Mao, Baolin Wang,
linux-arch, linux-mm, linux-arm-kernel, linux-kernel, linux-csky,
linux-hexagon, loongarch, linux-m68k, linux-openrisc,
linuxppc-dev, linux-riscv, Heiko Carstens, Vasily Gorbik,
Christian Borntraeger, Sven Schnelle, Qi Zheng, Vishal Moola,
Aneesh Kumar K . V, Kemeng Shi, Lance Yang, Peter Xu, Barry Song,
linux-s390
Cc: Guo Ren, Christophe Leroy, Palmer Dabbelt, Mike Rapoport,
Oscar Salvador, Alexandre Ghiti, Jisheng Zhang, Samuel Holland,
Anup Patel, Josh Poimboeuf, Breno Leitao, Alexander Gordeev,
Gerald Schaefer, Hugh Dickins, David Hildenbrand, Ryan Roberts,
Matthew Wilcox, Alex Shi, Naveen N . Rao, linux-fsdevel,
Andrew Morton
From: Alex Shi <alexs@kernel.org>
vmf.prealloc_pte is a pointer to page table memory, so convert it to a
struct ptdesc pointer.
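A sketch of prealloc_pte's lifecycle with the new type (all names as in
mm/memory.c after this patch): allocated up front, consumed by
pmd_install() or pgtable_trans_huge_deposit(), and freed only if it went
unused:
	vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);	/* struct ptdesc * now */
	if (!vmf->prealloc_pte)
		return VM_FAULT_OOM;

	/* consumed on success (the callee NULLs the pointer)... */
	pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);

	/* ...or released later if nothing used it */
	if (vmf->prealloc_pte) {
		pte_free(vma->vm_mm, vmf->prealloc_pte);
		vmf->prealloc_pte = NULL;
	}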
Signed-off-by: Alex Shi <alexs@kernel.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/mm.h | 2 +-
mm/filemap.c | 2 +-
mm/memory.c | 12 ++++++------
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7424f964dff3..749d6dd311fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -567,7 +567,7 @@ struct vm_fault {
* Protects pte page table if 'pte'
* is not NULL, otherwise pmd.
*/
- pgtable_t prealloc_pte; /* Pre-allocated pte page table.
+ struct ptdesc *prealloc_pte; /* Pre-allocated pte page table.
* vm_ops->map_pages() sets up a page
* table from atomic context.
* do_fault_around() pre-allocates
diff --git a/mm/filemap.c b/mm/filemap.c
index 3708ef71182e..d62150418b91 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3453,7 +3453,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
}
if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
- pmd_install(mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte);
+ pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);
return false;
}
diff --git a/mm/memory.c b/mm/memory.c
index 79685600d23f..1a5fb17ab045 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4648,7 +4648,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
* # flush A, B to clear the writeback
*/
if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
- vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
+ vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
@@ -4687,7 +4687,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, page_ptdesc(vmf->prealloc_pte));
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
/*
* We are going to consume the prealloc table,
* count that as nr_ptes.
@@ -4726,7 +4726,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
* related to pte entry. Use the preallocated table for that.
*/
if (arch_needs_pgtable_deposit() && !vmf->prealloc_pte) {
- vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vma->vm_mm));
+ vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
@@ -4868,7 +4868,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
}
if (vmf->prealloc_pte)
- pmd_install(vma->vm_mm, vmf->pmd, (struct ptdesc **)&vmf->prealloc_pte);
+ pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
return VM_FAULT_OOM;
}
@@ -5011,7 +5011,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
pte_off + vma_pages(vmf->vma) - vma_off) - 1;
if (pmd_none(*vmf->pmd)) {
- vmf->prealloc_pte = ptdesc_page(pte_alloc_one(vmf->vma->vm_mm));
+ vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm);
if (!vmf->prealloc_pte)
return VM_FAULT_OOM;
}
@@ -5197,7 +5197,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
/* preallocated pagetable is unused: free it */
if (vmf->prealloc_pte) {
- pte_free(vm_mm, page_ptdesc(vmf->prealloc_pte));
+ pte_free(vm_mm, vmf->prealloc_pte);
vmf->prealloc_pte = NULL;
}
return ret;
--
2.43.0