Date: Wed, 15 Mar 2023 12:07:05 +0200
From: Mike Rapoport <rppt@kernel.org>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Huacai Chen, WANG Xuerui, loongarch@lists.linux.dev
Subject: Re: [PATCH v4 13/36] loongarch: Implement the new page table range API
References: <20230315051444.3229621-1-willy@infradead.org>
	<20230315051444.3229621-14-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-14-willy@infradead.org>

On Wed, Mar 15, 2023 at 05:14:21AM +0000, Matthew Wilcox (Oracle) wrote:
> Add update_mmu_cache_range() and change _PFN_SHIFT to PFN_PTE_SHIFT.
> It would probably be more efficient to implement __update_tlb() by
> flushing the entire folio instead of calling __update_tlb() N times,
> but I'll leave that for someone who understands the architecture better.
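FWIW, if someone picks this up later, a minimal sketch of a cut-over to a
folio-wide flush is below. It is untested and purely illustrative, not part
of this patch: the threshold of 8 is an arbitrary placeholder, and it assumes
a single flush_tlb_range() beats nr preloads once nr is large enough. Note
the semantics also differ (flushing drops the entries so the next access
takes a refill, while __update_tlb() preloads them), so whether it is
actually a win needs exactly the architecture knowledge mentioned above.

	static inline void update_mmu_cache_range(struct vm_area_struct *vma,
			unsigned long address, pte_t *ptep, unsigned int nr)
	{
		/* Hypothetical batched path: one invalidation covers the range. */
		if (nr >= 8) {
			flush_tlb_range(vma, address, address + nr * PAGE_SIZE);
			return;
		}
		/* Small ranges: preload each entry, as the patch does. */
		for (;;) {
			__update_tlb(vma, address, ptep);
			if (--nr == 0)
				break;
			address += PAGE_SIZE;
			ptep++;
		}
	}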
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Huacai Chen
> Cc: WANG Xuerui
> Cc: loongarch@lists.linux.dev

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  arch/loongarch/include/asm/cacheflush.h   |  2 ++
>  arch/loongarch/include/asm/pgtable-bits.h |  4 ++--
>  arch/loongarch/include/asm/pgtable.h      | 28 ++++++++++++-----------
>  arch/loongarch/mm/pgtable.c               |  2 +-
>  arch/loongarch/mm/tlb.c                   |  2 +-
>  5 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/arch/loongarch/include/asm/cacheflush.h b/arch/loongarch/include/asm/cacheflush.h
> index 0681788eb474..7907eb42bfbd 100644
> --- a/arch/loongarch/include/asm/cacheflush.h
> +++ b/arch/loongarch/include/asm/cacheflush.h
> @@ -47,8 +47,10 @@ void local_flush_icache_range(unsigned long start, unsigned long end);
>  #define flush_cache_vmap(start, end)			do { } while (0)
>  #define flush_cache_vunmap(start, end)			do { } while (0)
>  #define flush_icache_page(vma, page)			do { } while (0)
> +#define flush_icache_pages(vma, page)			do { } while (0)
>  #define flush_icache_user_page(vma, page, addr, len)	do { } while (0)
>  #define flush_dcache_page(page)				do { } while (0)
> +#define flush_dcache_folio(folio)			do { } while (0)
>  #define flush_dcache_mmap_lock(mapping)			do { } while (0)
>  #define flush_dcache_mmap_unlock(mapping)		do { } while (0)
>
> diff --git a/arch/loongarch/include/asm/pgtable-bits.h b/arch/loongarch/include/asm/pgtable-bits.h
> index 8b98d22a145b..a1eb2e25446b 100644
> --- a/arch/loongarch/include/asm/pgtable-bits.h
> +++ b/arch/loongarch/include/asm/pgtable-bits.h
> @@ -48,12 +48,12 @@
>  #define _PAGE_NO_EXEC		(_ULCAST_(1) << _PAGE_NO_EXEC_SHIFT)
>  #define _PAGE_RPLV		(_ULCAST_(1) << _PAGE_RPLV_SHIFT)
>  #define _CACHE_MASK		(_ULCAST_(3) << _CACHE_SHIFT)
> -#define _PFN_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)
> +#define PFN_PTE_SHIFT		(PAGE_SHIFT - 12 + _PAGE_PFN_SHIFT)
>
>  #define _PAGE_USER	(PLV_USER << _PAGE_PLV_SHIFT)
>  #define _PAGE_KERN	(PLV_KERN << _PAGE_PLV_SHIFT)
>
> -#define _PFN_MASK (~((_ULCAST_(1) << (_PFN_SHIFT)) - 1) & \
> +#define _PFN_MASK (~((_ULCAST_(1) << (PFN_PTE_SHIFT)) - 1) & \
>  		   ((_ULCAST_(1) << (_PAGE_PFN_END_SHIFT)) - 1))
>
>  /*
> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
> index d28fb9dbec59..13aad0003e9a 100644
> --- a/arch/loongarch/include/asm/pgtable.h
> +++ b/arch/loongarch/include/asm/pgtable.h
> @@ -237,9 +237,9 @@ extern pmd_t mk_pmd(struct page *page, pgprot_t prot);
>  extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pmd_t pmd);
>
>  #define pte_page(x)		pfn_to_page(pte_pfn(x))
> -#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> _PFN_SHIFT))
> -#define pfn_pte(pfn, prot)	__pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
> -#define pfn_pmd(pfn, prot)	__pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
> +#define pte_pfn(x)		((unsigned long)(((x).pte & _PFN_MASK) >> PFN_PTE_SHIFT))
> +#define pfn_pte(pfn, prot)	__pte(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
> +#define pfn_pmd(pfn, prot)	__pmd(((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
>
>  /*
>   * Initialize a new pgd / pud / pmd table with invalid pointers.
> @@ -334,12 +334,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
>  	}
>  }
>
> -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
> -			      pte_t *ptep, pte_t pteval)
> -{
> -	set_pte(ptep, pteval);
> -}
> -
>  static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>  {
>  	/* Preserve global status for the pair */
> @@ -445,11 +439,19 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  extern void __update_tlb(struct vm_area_struct *vma,
>  			unsigned long address, pte_t *ptep);
>
> -static inline void update_mmu_cache(struct vm_area_struct *vma,
> -		unsigned long address, pte_t *ptep)
> +static inline void update_mmu_cache_range(struct vm_area_struct *vma,
> +		unsigned long address, pte_t *ptep, unsigned int nr)
>  {
> -	__update_tlb(vma, address, ptep);
> +	for (;;) {
> +		__update_tlb(vma, address, ptep);
> +		if (--nr == 0)
> +			break;
> +		address += PAGE_SIZE;
> +		ptep++;
> +	}
>  }
> +#define update_mmu_cache(vma, addr, ptep) \
> +	update_mmu_cache_range(vma, addr, ptep, 1)
>
>  #define __HAVE_ARCH_UPDATE_MMU_TLB
>  #define update_mmu_tlb	update_mmu_cache
> @@ -462,7 +464,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
>
>  static inline unsigned long pmd_pfn(pmd_t pmd)
>  {
> -	return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT;
> +	return (pmd_val(pmd) & _PFN_MASK) >> PFN_PTE_SHIFT;
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
> index 36a6dc0148ae..1260cf30e3ee 100644
> --- a/arch/loongarch/mm/pgtable.c
> +++ b/arch/loongarch/mm/pgtable.c
> @@ -107,7 +107,7 @@ pmd_t mk_pmd(struct page *page, pgprot_t prot)
>  {
>  	pmd_t pmd;
>
> -	pmd_val(pmd) = (page_to_pfn(page) << _PFN_SHIFT) | pgprot_val(prot);
> +	pmd_val(pmd) = (page_to_pfn(page) << PFN_PTE_SHIFT) | pgprot_val(prot);
>
>  	return pmd;
>  }
> diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
> index 8bad6b0cff59..73652930b268 100644
> --- a/arch/loongarch/mm/tlb.c
> +++ b/arch/loongarch/mm/tlb.c
> @@ -246,7 +246,7 @@ static void output_pgtable_bits_defines(void)
>  	pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT);
>  	pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
>  	pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT);
> -	pr_define("_PFN_SHIFT %d\n", _PFN_SHIFT);
> +	pr_define("PFN_PTE_SHIFT %d\n", PFN_PTE_SHIFT);
>  	pr_debug("\n");
>  }
>
> --
> 2.39.2
>

--
Sincerely yours,
Mike.