Date: Sun, 16 Jul 2023 08:11:56 -0700
From: Catalin Marinas
Subject: Re: [PATCH v10 4/4] arm64: support batched/deferred tlb shootdown during page reclamation/migration
References: <20230710083914.18336-1-yangyicong@huawei.com>
 <20230710083914.18336-5-yangyicong@huawei.com>
In-Reply-To: <20230710083914.18336-5-yangyicong@huawei.com>
To: Yicong Yang
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org,
	mark.rutland@arm.com, ryan.roberts@arm.com, will@kernel.org,
	anshuman.khandual@arm.com, linux-doc@vger.kernel.org, corbet@lwn.net,
	peterz@infradead.org, arnd@arndb.de, punit.agrawal@bytedance.com,
	linux-kernel@vger.kernel.org, darren@os.amperecomputing.com,
	yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com,
	zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, Barry Song <21cnbao@gmail.com>,
	wangkefeng.wang@huawei.com, xhao@linux.alibaba.com,
	prime.zeng@hisilicon.com, Jonathan.Cameron@huawei.com, Barry Song,
	Nadav Amit, Mel Gorman

On Mon, Jul 10, 2023 at 04:39:14PM +0800, Yicong Yang wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 7856c3a3e35a..f0ce8208c57f 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -96,6 +96,7 @@ config ARM64
>  	select ARCH_SUPPORTS_NUMA_BALANCING
>  	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
>  	select ARCH_SUPPORTS_PER_VMA_LOCK
> +	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if EXPERT

I don't want EXPERT to turn on a feature that's not selectable by the
user. This would lead to different performance behaviour based on
EXPERT. Just select it unconditionally.

> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..4bb9cec62e26 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -254,17 +254,23 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  	dsb(ish);
>  }
>  
> -static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
> -					 unsigned long uaddr)
> +static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
> +					   unsigned long uaddr)
>  {
>  	unsigned long addr;
>  
>  	dsb(ishst);
> -	addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
> +	addr = __TLBI_VADDR(uaddr, ASID(mm));
>  	__tlbi(vale1is, addr);
>  	__tlbi_user(vale1is, addr);
>  }
>  
> +static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
> +					 unsigned long uaddr)
> +{
> +	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
> +}
> +
>  static inline void flush_tlb_page(struct vm_area_struct *vma,
>  				  unsigned long uaddr)
>  {
> @@ -272,6 +278,42 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>  
> +#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH

If it's selected unconditionally, we won't need this #ifdef here.
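
IOW, something like the below for the Kconfig entry (only a rough sketch
on top of this patch to show what I mean, not tested):

 	select ARCH_SUPPORTS_PER_VMA_LOCK
-	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH if EXPERT
+	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH

That way the behaviour is the same whether EXPERT is set or not.
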
> +
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
> +	/*
> +	 * TLB flush deferral is not required on systems, which are affected with

"affected by" and drop the comma before "which".

-- 
Catalin