From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: 
Date: Wed, 24 Aug 2022 17:43:01 +0800
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.9.1
Subject: Re: [PATCH v3 3/4] mm: rmap: Extend tlbbatch APIs to fit new platforms
Content-Language: en-US
From: Kefeng Wang
To: Yicong Yang
Cc: Barry Song <21cnbao@gmail.com>, Barry Song, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Nadav Amit, Mel Gorman
References: <20220822082120.8347-1-yangyicong@huawei.com> <20220822082120.8347-4-yangyicong@huawei.com>
In-Reply-To: <20220822082120.8347-4-yangyicong@huawei.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit
Precedence: bulk
List-ID: 
X-Mailing-List: linux-s390@vger.kernel.org

On 2022/8/22 16:21, Yicong Yang wrote:
> From: Barry Song
>
> Add uaddr to tlbbatch APIs so that platforms like ARM64 are
> able to apply this on their specific hardware features. For
> ARM64, this could be sending tlbi into hardware queues for
> the page with this particular uaddr.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: Dave Hansen
> Cc: "H. Peter Anvin"
> Cc: Nadav Amit
> Cc: Mel Gorman
> Tested-by: Xin Hao
> Signed-off-by: Barry Song
> Signed-off-by: Yicong Yang

Reviewed-by: Kefeng Wang

> ---
>  arch/x86/include/asm/tlbflush.h |  3 ++-
>  mm/rmap.c                       | 10 ++++++----
>  2 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 8a497d902c16..5bd78ae55cd4 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>  }
>  
>  static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> -					struct mm_struct *mm)
> +					struct mm_struct *mm,
> +					unsigned long uaddr)
>  {
>  	inc_mm_tlb_gen(mm);
>  	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a17a004550c6..7187a72b63b1 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -642,12 +642,13 @@ void try_to_unmap_flush_dirty(void)
>  #define TLB_FLUSH_BATCH_PENDING_LARGE			\
>  	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
>  
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
> +				      unsigned long uaddr)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>  	int batch, nbatch;
>  
> -	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
> +	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm, uaddr);
>  	tlb_ubc->flush_required = true;
>  
>  	/*
> @@ -725,7 +726,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>  	}
>  }
>  #else
> -static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> +static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
> +				      unsigned long uaddr)
> {
> }
>  
> @@ -1587,7 +1589,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 */
>  			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>  
> -			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> +			set_tlb_ubc_flush_pending(mm, pte_dirty(pteval), address);
>  		} else {
>  			pteval = ptep_clear_flush(vma, address, pvmw.pte);
>  		}