Subject: [PATCH 12/7] hexagon: Implement the new page table range API
From: Matthew Wilcox (Oracle) <willy@infradead.org>
Date: 2023-02-15  0:04 UTC
To: linux-mm, Brian Cain, linux-hexagon, linux-arch
Cc: Matthew Wilcox (Oracle)
In-Reply-To: <20230215000446.1655635-1-willy@infradead.org>

Add set_ptes() and update_mmu_cache_range().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/hexagon/include/asm/cacheflush.h |  7 +++++--
 arch/hexagon/include/asm/pgtable.h    | 16 ++++++++++++++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index 6eff0730e6ef..63ca314ede89 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
  * clean the cache when the PTE is set.
  *
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	/* generic_ptrace_pokedata doesn't wind up here, does it? */
 }
 
+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page
diff --git a/arch/hexagon/include/asm/pgtable.h b/arch/hexagon/include/asm/pgtable.h
index 59393613d086..f58f1d920769 100644
--- a/arch/hexagon/include/asm/pgtable.h
+++ b/arch/hexagon/include/asm/pgtable.h
@@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte)
 #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
 
 /*
- * set_pte_at - update page table and do whatever magic may be
+ * set_ptes - update page table and do whatever magic may be
  * necessary to make the underlying hardware/firmware take note.
  *
  * VM may require a virtual instruction to alert the MMU.
  */
-#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
-- 
2.39.1
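For reference, the contract the patch implements: set_ptes() writes nr consecutive page-table entries starting at ptep, advancing the mapped physical address by PAGE_SIZE for each entry, so one call can map a whole folio; the mm and addr arguments belong to the generic interface and are unused on hexagon. Below is a minimal stand-alone sketch of that loop, not hexagon code: pte_t is modelled as a bare physical address, set_pte() as a plain store, mm/addr are dropped, and PAGE_SIZE, the page-table array, and the sample addresses are made-up stand-ins.

/*
 * User-space model of the set_ptes() loop in the patch -- illustration
 * only.  pte_t here is just a physical address; a real PTE also carries
 * protection/status bits that this model ignores.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL		/* stand-in page size */

typedef unsigned long pte_t;		/* stand-in for the arch pte_t */

static void set_pte(pte_t *ptep, pte_t pte)
{
	*ptep = pte;			/* stand-in for the arch set_pte() */
}

static void set_ptes(pte_t *ptep, pte_t pte, unsigned int nr)
{
	/*
	 * Same shape as the patch: write nr entries, bumping the mapped
	 * physical address by one page per entry.
	 */
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		pte += PAGE_SIZE;
	}
}

int main(void)
{
	pte_t ptes[4] = { 0 };
	unsigned int i;

	/* Map four consecutive pages starting at a made-up physical 0x80000000. */
	set_ptes(ptes, 0x80000000UL, 4);

	for (i = 0; i < 4; i++)
		printf("pte[%u] = %#lx\n", i, ptes[i]);	/* ...0000, ...1000, ...2000, ...3000 */
	return 0;
}

The for (;;) / --nr shape in the patch writes the final entry without a trailing pointer or value increment, and set_pte_at() simply becomes the nr == 1 case of the same loop.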
Subject: RE: [PATCH 12/7] hexagon: Implement the new page table range API
From: Brian Cain <bcain@quicinc.com>
Date: 2023-02-15 16:22 UTC
To: Matthew Wilcox (Oracle), linux-mm@kvack.org, linux-hexagon@vger.kernel.org, linux-arch@vger.kernel.org

> -----Original Message-----
> From: Matthew Wilcox (Oracle) <willy@infradead.org>
...
>
> Add set_ptes() and update_mmu_cache_range().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  arch/hexagon/include/asm/cacheflush.h |  7 +++++--
>  arch/hexagon/include/asm/pgtable.h    | 16 ++++++++++++++--
>  2 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/arch/hexagon/include/asm/cacheflush.h
> b/arch/hexagon/include/asm/cacheflush.h
> index 6eff0730e6ef..63ca314ede89 100644
> --- a/arch/hexagon/include/asm/cacheflush.h
> +++ b/arch/hexagon/include/asm/cacheflush.h
> @@ -58,12 +58,15 @@ extern void flush_cache_all_hexagon(void);
>   * clean the cache when the PTE is set.
>   *
>   */
> -static inline void update_mmu_cache(struct vm_area_struct *vma,
> -				unsigned long address, pte_t *ptep)
> +static inline void update_mmu_cache_range(struct vm_area_struct *vma,
> +		unsigned long address, pte_t *ptep, unsigned int nr)
>  {
>  	/* generic_ptrace_pokedata doesn't wind up here, does it? */
>  }
>
> +#define update_mmu_cache(vma, addr, ptep) \
> +	update_mmu_cache_range(vma, addr, ptep, 1)
> +
>  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  		       unsigned long vaddr, void *dst, void *src, int len);
>  #define copy_to_user_page copy_to_user_page
> diff --git a/arch/hexagon/include/asm/pgtable.h
> b/arch/hexagon/include/asm/pgtable.h
> index 59393613d086..f58f1d920769 100644
> --- a/arch/hexagon/include/asm/pgtable.h
> +++ b/arch/hexagon/include/asm/pgtable.h
> @@ -346,12 +346,24 @@ static inline int pte_exec(pte_t pte)
>  #define set_pmd(pmdptr, pmdval) (*(pmdptr) = (pmdval))
>
>  /*
> - * set_pte_at - update page table and do whatever magic may be
> + * set_ptes - update page table and do whatever magic may be
>   * necessary to make the underlying hardware/firmware take note.
>   *
>   * VM may require a virtual instruction to alert the MMU.
>   */
> -#define set_pte_at(mm, addr, ptep, pte) set_pte(ptep, pte)
> +static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
> +		pte_t *ptep, pte_t pte, unsigned int nr)
> +{
> +	for (;;) {
> +		set_pte(ptep, pte);
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		pte_val(pte) += PAGE_SIZE;
> +	}
> +}
> +
> +#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
>
>  static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>  {
> --
> 2.39.1

Acked-by: Brian Cain <bcain@quicinc.com>