From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-mips@vger.kernel.org,
	Thomas Bogendoerfer, linux-arch@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 17/17] mips: Implement the new page table range API
Date: Wed, 15 Feb 2023 20:09:20 +0000
Message-Id: <20230215200920.1849567-4-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230215200920.1849567-1-willy@infradead.org>
References: <20230215000446.1655635-1-willy@infradead.org>
 <20230215200920.1849567-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and
flush_dcache_folio().  PG_arch_1 (aka PG_dcache_dirty) becomes a
per-folio flag instead of per-page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
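A rough usage sketch, for illustration only and not part of the diff below:
the set_ptes() and update_mmu_cache_range() prototypes are assumed from the
generic range API elsewhere in this series, roughly set_ptes(mm, addr, ptep,
pte, nr) and update_mmu_cache_range(vma, addr, ptep, nr), so treat the exact
signatures as assumptions rather than something this patch defines.  The
idea is that a caller installs all the PTEs for one folio and then tells the
arch about the whole range once, instead of once per page:

	/*
	 * Illustrative sketch only: how generic mm code is expected to
	 * drive the range API when mapping @nr pages of one folio at
	 * @addr.  Prototypes assumed, not taken from this patch.
	 */
	static void sketch_map_folio_range(struct vm_area_struct *vma,
			unsigned long addr, pte_t *ptep, struct folio *folio,
			unsigned int nr, pgprot_t prot)
	{
		pte_t pte = mk_pte(folio_page(folio, 0), prot);

		/* No-op on MIPS; other architectures may flush here. */
		flush_icache_pages(vma, folio_page(folio, 0), nr);
		/* Write nr consecutive PTEs, each one page further on. */
		set_ptes(vma->vm_mm, addr, ptep, pte, nr);
		/* One MMU-cache update for the whole range. */
		update_mmu_cache_range(vma, addr, ptep, nr);
	}
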
 arch/mips/include/asm/cacheflush.h | 32 +++++++++++------
 arch/mips/mm/c-r4k.c               |  5 +--
 arch/mips/mm/cache.c               | 56 +++++++++++++++---------------
 arch/mips/mm/init.c                | 17 +++++----
 4 files changed, 63 insertions(+), 47 deletions(-)

diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index b3dc9c589442..2683cade42ef 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -36,12 +36,12 @@
  */
 #define PG_dcache_dirty			PG_arch_1
 
-#define Page_dcache_dirty(page)		\
-	test_bit(PG_dcache_dirty, &(page)->flags)
-#define SetPageDcacheDirty(page)	\
-	set_bit(PG_dcache_dirty, &(page)->flags)
-#define ClearPageDcacheDirty(page)	\
-	clear_bit(PG_dcache_dirty, &(page)->flags)
+#define folio_test_dcache_dirty(folio)		\
+	test_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_set_dcache_dirty(folio)		\
+	set_bit(PG_dcache_dirty, &(folio)->flags)
+#define folio_clear_dcache_dirty(folio)		\
+	clear_bit(PG_dcache_dirty, &(folio)->flags)
 
 extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
@@ -50,15 +50,24 @@ extern void (*flush_cache_mm)(struct mm_struct *mm);
 extern void (*flush_cache_range)(struct vm_area_struct *vma,
 	unsigned long start, unsigned long end);
 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
-extern void __flush_dcache_page(struct page *page);
+extern void __flush_dcache_pages(struct page *page, unsigned int nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
+static inline void flush_dcache_folio(struct folio *folio)
+{
+	if (cpu_has_dc_aliases)
+		__flush_dcache_pages(&folio->page, folio_nr_pages(folio));
+	else if (!cpu_has_ic_fills_f_dc)
+		folio_set_dcache_dirty(folio);
+}
+#define flush_dcache_folio flush_dcache_folio
+
 static inline void flush_dcache_page(struct page *page)
 {
 	if (cpu_has_dc_aliases)
-		__flush_dcache_page(page);
+		__flush_dcache_pages(page, 1);
 	else if (!cpu_has_ic_fills_f_dc)
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(page_folio(page));
 }
 
 #define flush_dcache_mmap_lock(mapping)	do { } while (0)
@@ -73,10 +82,11 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 		__flush_anon_page(page, vmaddr);
 }
 
-static inline void flush_icache_page(struct vm_area_struct *vma,
-	struct page *page)
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
 {
 }
+#define flush_icache_page(vma, page)	flush_icache_pages(vma, page, 1)
 
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index a549fa98c2f4..7d2a42f0cffd 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -679,13 +679,14 @@ static inline void local_r4k_flush_cache_page(void *args)
 	if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
 		vaddr = NULL;
 	else {
+		struct folio *folio = page_folio(page);
 		/*
 		 * Use kmap_coherent or kmap_atomic to do flushes for
 		 * another ASID than the current one.
 		 */
 		map_coherent = (cpu_has_dc_aliases &&
-				page_mapcount(page) &&
-				!Page_dcache_dirty(page));
+				folio_mapped(folio) &&
+				!folio_test_dcache_dirty(folio));
 		if (map_coherent)
 			vaddr = kmap_coherent(page, addr);
 		else
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 11b3e7ddafd5..0668435521fc 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -82,13 +82,15 @@ SYSCALL_DEFINE3(cacheflush, unsigned long, addr, unsigned long, bytes,
 	return 0;
 }
 
-void __flush_dcache_page(struct page *page)
+void __flush_dcache_pages(struct page *page, unsigned int nr)
 {
-	struct address_space *mapping = page_mapping_file(page);
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio_flush_mapping(folio);
 	unsigned long addr;
+	unsigned int i;
 
 	if (mapping && !mapping_mapped(mapping)) {
-		SetPageDcacheDirty(page);
+		folio_set_dcache_dirty(folio);
 		return;
 	}
 
@@ -97,25 +99,21 @@ void __flush_dcache_page(struct page *page)
 	 * case is for exec env/arg pages and those are %99 certainly going to
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
-	if (PageHighMem(page))
-		addr = (unsigned long)kmap_atomic(page);
-	else
-		addr = (unsigned long)page_address(page);
-
-	flush_data_cache_page(addr);
-
-	if (PageHighMem(page))
-		kunmap_atomic((void *)addr);
+	for (i = 0; i < nr; i++) {
+		addr = (unsigned long)kmap_local_page(page + i);
+		flush_data_cache_page(addr);
+		kunmap_local((void *)addr);
+	}
 }
-
-EXPORT_SYMBOL(__flush_dcache_page);
+EXPORT_SYMBOL(__flush_dcache_pages);
 
 void __flush_anon_page(struct page *page, unsigned long vmaddr)
 {
 	unsigned long addr = (unsigned long) page_address(page);
+	struct folio *folio = page_folio(page);
 
 	if (pages_do_alias(addr, vmaddr)) {
-		if (page_mapcount(page) && !Page_dcache_dirty(page)) {
+		if (folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 			void *kaddr;
 
 			kaddr = kmap_coherent(page, vmaddr);
@@ -130,27 +128,29 @@ EXPORT_SYMBOL(__flush_anon_page);
 
 void __update_cache(unsigned long address, pte_t pte)
 {
-	struct page *page;
+	struct folio *folio;
 	unsigned long pfn, addr;
 	int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
+	unsigned int i;
 
 	pfn = pte_pfn(pte);
 	if (unlikely(!pfn_valid(pfn)))
 		return;
 
-	page = pfn_to_page(pfn);
-	if (Page_dcache_dirty(page)) {
-		if (PageHighMem(page))
-			addr = (unsigned long)kmap_atomic(page);
-		else
-			addr = (unsigned long)page_address(page);
-
-		if (exec || pages_do_alias(addr, address & PAGE_MASK))
-			flush_data_cache_page(addr);
-		if (PageHighMem(page))
-			kunmap_atomic((void *)addr);
+	folio = page_folio(pfn_to_page(pfn));
+	address &= PAGE_MASK;
+	address -= offset_in_folio(folio, pfn << PAGE_SHIFT);
+
+	if (folio_test_dcache_dirty(folio)) {
+		for (i = 0; i < folio_nr_pages(folio); i++) {
+			addr = (unsigned long)kmap_local_folio(folio, i);
 
-		ClearPageDcacheDirty(page);
+			if (exec || pages_do_alias(addr, address))
+				flush_data_cache_page(addr);
+			kunmap_local((void *)addr);
+			address += PAGE_SIZE;
+		}
+		folio_clear_dcache_dirty(folio);
 	}
 }
 
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 5a8002839550..19d4ca3b3fbd 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -88,7 +88,7 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
 	pte_t pte;
 	int tlbidx;
 
-	BUG_ON(Page_dcache_dirty(page));
+	BUG_ON(folio_test_dcache_dirty(page_folio(page)));
 
 	preempt_disable();
 	pagefault_disable();
@@ -169,11 +169,12 @@ void kunmap_coherent(void)
 void copy_user_highpage(struct page *to, struct page *from,
 	unsigned long vaddr, struct vm_area_struct *vma)
 {
+	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
 
 	vto = kmap_atomic(to);
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(from) && !Page_dcache_dirty(from)) {
+	    folio_mapped(src) && !folio_test_dcache_dirty(src)) {
 		vfrom = kmap_coherent(from, vaddr);
 		copy_page(vto, vfrom);
 		kunmap_coherent();
@@ -194,15 +195,17 @@ void copy_to_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(vto, src, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 	if (vma->vm_flags & VM_EXEC)
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
@@ -212,15 +215,17 @@ void copy_from_user_page(struct vm_area_struct *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
+	struct folio *folio = page_folio(page);
+
 	if (cpu_has_dc_aliases &&
-	    page_mapcount(page) && !Page_dcache_dirty(page)) {
+	    folio_mapped(folio) && !folio_test_dcache_dirty(folio)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
 		kunmap_coherent();
 	} else {
 		memcpy(dst, src, len);
 		if (cpu_has_dc_aliases)
-			SetPageDcacheDirty(page);
+			folio_set_dcache_dirty(folio);
 	}
 }
 EXPORT_SYMBOL_GPL(copy_from_user_page);
-- 
2.39.1