* [PATCH 15/17] m68k: Implement the new page table range API
       [not found] ` <20230215200920.1849567-1-willy@infradead.org>
@ 2023-02-15 20:09 ` Matthew Wilcox (Oracle)
  2023-02-16  0:59   ` Michael Schmitz
  0 siblings, 1 reply; 5+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-02-15 20:09 UTC (permalink / raw)
  To: linux-mm, linux-m68k, Geert Uytterhoeven, linux-arch
  Cc: Matthew Wilcox (Oracle)

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  I'm not entirely certain that the 040/060 case
in __flush_pages_to_ram() is correct.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/m68k/include/asm/cacheflush_mm.h | 12 ++++++++----
 arch/m68k/include/asm/pgtable_mm.h    | 21 ++++++++++++++++++---
 arch/m68k/mm/motorola.c               |  2 +-
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 1ac55e7b47f0..2244c35178d0 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -220,13 +220,13 @@ static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vm

 /* Push the page at kernel virtual address and clear the icache */
 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
-static inline void __flush_page_to_ram(void *vaddr)
+static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 {
 	if (CPU_IS_COLDFIRE) {
 		unsigned long addr, start, end;
 		addr = ((unsigned long) vaddr) & ~(PAGE_SIZE - 1);
 		start = addr & ICACHE_SET_MASK;
-		end = (addr + PAGE_SIZE - 1) & ICACHE_SET_MASK;
+		end = (addr + nr * PAGE_SIZE - 1) & ICACHE_SET_MASK;
 		if (start > end) {
 			flush_cf_bcache(0, end);
 			end = ICACHE_MAX_ADDR;
@@ -249,10 +249,14 @@ static inline void __flush_page_to_ram(void *vaddr)
 }

 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE	1
-#define flush_dcache_page(page)	__flush_page_to_ram(page_address(page))
+#define flush_dcache_page(page)	__flush_pages_to_ram(page_address(page), 1)
+#define flush_dcache_folio(folio)		\
+	__flush_pages_to_ram(folio_address(folio), folio_nr_pages(folio))
 #define flush_dcache_mmap_lock(mapping)		do { } while (0)
 #define flush_dcache_mmap_unlock(mapping)	do { } while (0)
-#define flush_icache_page(vma, page)	__flush_page_to_ram(page_address(page))
+#define flush_icache_pages(vma, page, nr)	\
+	__flush_pages_to_ram(page_address(page), nr)
+#define flush_icache_page(vma, page) flush_icache_pages(vma, page, 1)

 extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
				    unsigned long addr, int len);
diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index b93c41fe2067..400206c17c97 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -31,8 +31,20 @@
 do{					\
	*(pteptr) = (pteval);		\
 } while(0)
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)

+static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+		pte_t *ptep, pte_t pte, unsigned int nr)
+{
+	for (;;) {
+		set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		pte_val(pte) += PAGE_SIZE;
+	}
+}
+
+#define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)

 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #if CONFIG_PGTABLE_LEVELS == 3
@@ -138,11 +150,14 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * tables contain all the necessary information.  The Sun3 does, but
  * they are updated on demand.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
-				    unsigned long address, pte_t *ptep)
+static inline void update_mmu_cache_range(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 }

+#define update_mmu_cache(vma, addr, ptep) \
+	update_mmu_cache_range(vma, addr, ptep, 1)
+
 #endif	/* !__ASSEMBLY__ */

 /* MMU-specific headers */
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 2a375637e007..7784d0fcdf6e 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -81,7 +81,7 @@ static inline void cache_page(void *vaddr)

 void mmu_page_ctor(void *page)
 {
-	__flush_page_to_ram(page);
+	__flush_pages_to_ram(page, 1);
	flush_tlb_kernel_page(page);
	nocache_page(page);
 }
--
2.39.1

^ permalink raw reply related	[flat|nested] 5+ messages in thread
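The key property of the new set_ptes() above is that consecutive PTEs are derived from the first one by bumping the physical address by PAGE_SIZE per entry. The following is a minimal userspace model of that loop, for illustration only: `pte_t` is simplified to a bare unsigned long with the low bits acting as flag bits, and `MODEL_PAGE_SIZE` stands in for the kernel's PAGE_SIZE.

```c
#include <assert.h>

/* Simplified stand-ins for the kernel types, for illustration only. */
typedef unsigned long pte_t;
#define MODEL_PAGE_SIZE 4096UL

static void set_pte(pte_t *ptep, pte_t pte)
{
	/* In the kernel this is *(pteptr) = (pteval); here it is the same. */
	*ptep = pte;
}

/*
 * Mirrors the structure of the patch's set_ptes(): write the first PTE,
 * then advance the pointer and add one page to the address portion for
 * each remaining entry.  The flag bits in the low 12 bits are untouched
 * because PAGE_SIZE additions never carry into them.
 */
static void set_ptes_model(pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		pte += MODEL_PAGE_SIZE;
	}
}
```

Filling a 4-entry table starting at physical 0x10000 with flags 0x3 yields entries 0x10003, 0x11003, 0x12003, 0x13003 — one page apart, flags preserved.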
* Re: [PATCH 15/17] m68k: Implement the new page table range API
  2023-02-15 20:09 ` [PATCH 15/17] m68k: Implement the new page table range API Matthew Wilcox (Oracle)
@ 2023-02-16  0:59   ` Michael Schmitz
  2023-02-16  4:26     ` Matthew Wilcox
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Schmitz @ 2023-02-16  0:59 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), linux-mm, linux-m68k,
	Geert Uytterhoeven, linux-arch

Matthew,

On 16/02/23 09:09, Matthew Wilcox (Oracle) wrote:
> Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
> flush_dcache_folio().  I'm not entirely certain that the 040/060 case
> in __flush_pages_to_ram() is correct.

I'm pretty sure you need to iterate to hit each of the pages - the code
as is will only push cache entries for the first page.

Quoting the 040 UM:

"Both instructions [cinv, cpush] allow operation on a single cache line,
all cache lines in a specific page, or an entire cache, and can select
one or both caches for the operation. For line and page operations, a
physical address in an address register specifies the memory address."

Cheers,

	Michael

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

[...]

^ permalink raw reply	[flat|nested] 5+ messages in thread
* Re: [PATCH 15/17] m68k: Implement the new page table range API
  2023-02-16  0:59   ` Michael Schmitz
@ 2023-02-16  4:26     ` Matthew Wilcox
  2023-02-16  7:55       ` Geert Uytterhoeven
  2023-02-16 22:03       ` Michael Schmitz
  0 siblings, 2 replies; 5+ messages in thread
From: Matthew Wilcox @ 2023-02-16  4:26 UTC (permalink / raw)
  To: Michael Schmitz; +Cc: linux-mm, linux-m68k, Geert Uytterhoeven, linux-arch

On Thu, Feb 16, 2023 at 01:59:44PM +1300, Michael Schmitz wrote:
> Matthew,
>
> On 16/02/23 09:09, Matthew Wilcox (Oracle) wrote:
> > Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
> > flush_dcache_folio().  I'm not entirely certain that the 040/060 case
> > in __flush_pages_to_ram() is correct.
>
> I'm pretty sure you need to iterate to hit each of the pages - the code
> as is will only push cache entries for the first page.
>
> Quoting the 040 UM:
>
> "Both instructions [cinv, cpush] allow operation on a single cache line,
> all cache lines in a specific page, or an entire cache, and can select
> one or both caches for the operation. For line and page operations, a
> physical address in an address register specifies the memory address."

I actually found that!  What I didn't find was how to tell if this
cpush insn is the one which is operating on a single cache line,
a single page, or the entire cache.

So I should do a loop around this asm and call it once for each page
we're flushing?

^ permalink raw reply	[flat|nested] 5+ messages in thread
* Re: [PATCH 15/17] m68k: Implement the new page table range API
  2023-02-16  4:26     ` Matthew Wilcox
@ 2023-02-16  7:55       ` Geert Uytterhoeven
  0 siblings, 0 replies; 5+ messages in thread
From: Geert Uytterhoeven @ 2023-02-16  7:55 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: Michael Schmitz, linux-mm, linux-m68k, linux-arch

Hi Matthew,

On Thu, Feb 16, 2023 at 5:26 AM Matthew Wilcox <willy@infradead.org> wrote:
> On Thu, Feb 16, 2023 at 01:59:44PM +1300, Michael Schmitz wrote:
> > On 16/02/23 09:09, Matthew Wilcox (Oracle) wrote:
> > > Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
> > > flush_dcache_folio().  I'm not entirely certain that the 040/060 case
> > > in __flush_pages_to_ram() is correct.
> >
> > I'm pretty sure you need to iterate to hit each of the pages - the code
> > as is will only push cache entries for the first page.
> >
> > Quoting the 040 UM:
> >
> > "Both instructions [cinv, cpush] allow operation on a single cache line,
> > all cache lines in a specific page, or an entire cache, and can select
> > one or both caches for the operation. For line and page operations, a
> > physical address in an address register specifies the memory address."
>
> I actually found that!  What I didn't find was how to tell if this
> cpush insn is the one which is operating on a single cache line,
> a single page, or the entire cache.

cpushl (line), cpushp (page), cpusha (all).
Same for cinv.

Gr{oetje,eeting}s,

						Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like
that. -- Linus Torvalds

^ permalink raw reply	[flat|nested] 5+ messages in thread
* Re: [PATCH 15/17] m68k: Implement the new page table range API
  2023-02-16  4:26     ` Matthew Wilcox
  2023-02-16  7:55       ` Geert Uytterhoeven
@ 2023-02-16 22:03       ` Michael Schmitz
  1 sibling, 0 replies; 5+ messages in thread
From: Michael Schmitz @ 2023-02-16 22:03 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm, linux-m68k, Geert Uytterhoeven, linux-arch

Hi Matthew,

On 16/02/23 17:26, Matthew Wilcox wrote:
> On Thu, Feb 16, 2023 at 01:59:44PM +1300, Michael Schmitz wrote:
>> Matthew,
>>
>> On 16/02/23 09:09, Matthew Wilcox (Oracle) wrote:
>>> Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
>>> flush_dcache_folio().  I'm not entirely certain that the 040/060 case
>>> in __flush_pages_to_ram() is correct.
>> I'm pretty sure you need to iterate to hit each of the pages - the code
>> as is will only push cache entries for the first page.
>>
>> Quoting the 040 UM:
>>
>> "Both instructions [cinv, cpush] allow operation on a single cache line,
>> all cache lines in a specific page, or an entire cache, and can select
>> one or both caches for the operation. For line and page operations, a
>> physical address in an address register specifies the memory address."
> I actually found that!  What I didn't find was how to tell if this
> cpush insn is the one which is operating on a single cache line,
> a single page, or the entire cache.
>
> So I should do a loop around this asm and call it once for each page
> we're flushing?

Yes, that's the idea. I'm uncertain whether contiguous virtual pages are
always guaranteed to have contiguous physical mappings, so no point in
trying to 'optimize' and shift the loop into inline assembly.

Cheers,

	Michael

^ permalink raw reply	[flat|nested] 5+ messages in thread
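The fix the thread converges on — keep the existing single-page cpush sequence and call it once per page from C — can be sketched roughly as follows. This is an illustrative userspace model, not kernel code: `flush_040_page()` is a hypothetical stand-in for the per-page `cpushp` inline asm on 040/060, and the recording of flushed addresses exists only so the loop can be exercised outside the kernel.

```c
#include <assert.h>

#define MODEL_PAGE_SIZE 4096UL

/* Record of which page addresses were "flushed", for testing only. */
static unsigned long flushed[16];
static unsigned int nflushed;

/* Stand-in for the per-page flush; in the kernel this would be the
 * inline asm issuing cpushp on the page containing vaddr. */
static void flush_040_page(void *vaddr)
{
	flushed[nflushed++] = (unsigned long)vaddr;
}

/*
 * The suggested shape of the 040/060 branch of __flush_pages_to_ram():
 * round the start address down to a page boundary, then invoke the
 * per-page flush once for each of the nr pages.  The loop stays in C
 * because, as noted above, contiguous virtual pages need not have
 * contiguous physical mappings.
 */
static void flush_040_pages(void *vaddr, unsigned int nr)
{
	unsigned long addr = (unsigned long)vaddr & ~(MODEL_PAGE_SIZE - 1);
	unsigned int i;

	for (i = 0; i < nr; i++)
		flush_040_page((void *)(addr + i * MODEL_PAGE_SIZE));
}
```

Flushing 3 pages starting from an unaligned address such as 0x2abc should hit page bases 0x2000, 0x3000, and 0x4000, one flush each.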