Message-ID: <9f9906b5-cdae-273c-06a3-5e100fe2ccd8@redhat.com>
Date: Wed, 5 Jul 2023 10:10:45 +1000
Subject: Re: [RESEND PATCH v5 05/11] arm64: tlb: Refactor the core flush algorithm of __flush_tlb_range
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier, James Morse, Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu, Anup Patel,
    Atish Patra, Jing Zhang, Colton Lewis, David Matlack,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, Catalin Marinas
References: <20230621175002.2832640-1-rananta@google.com> <20230621175002.2832640-6-rananta@google.com>
From: Gavin Shan
In-Reply-To: <20230621175002.2832640-6-rananta@google.com>

On 6/22/23 03:49, Raghavendra Rao Ananta wrote:
> Currently, the core TLB flush functionality of __flush_tlb_range()
> hardcodes vae1is (and variants) for the flush operation. In the
> upcoming patches, the KVM code reuses this core algorithm with
> ipas2e1is for range based TLB invalidations based on the IPA.
> Hence, extract the core flush functionality of __flush_tlb_range()
> into its own macro that accepts an 'op' argument to pass any
> TLBI operation, such that other callers (KVM) can benefit.
> 
> No functional changes intended.
> 
> Signed-off-by: Raghavendra Rao Ananta
> Reviewed-by: Catalin Marinas
> ---
>  arch/arm64/include/asm/tlbflush.h | 108 +++++++++++++++---------------
>  1 file changed, 55 insertions(+), 53 deletions(-)
> 

With the following nits addressed:

Reviewed-by: Gavin Shan

> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25d..4775378b6da1b 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -278,14 +278,61 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>   */
>  #define MAX_TLBI_OPS	PTRS_PER_PTE
> 
> +/*
> + * When the CPU does not support TLB range operations, flush the TLB
> + * entries one by one at the granularity of 'stride'. If the TLB
> + * range ops are supported, then:
> + *
> + * 1. If 'pages' is odd, flush the first page through non-range
> + *    operations;
> + *
> + * 2. For remaining pages: the minimum range granularity is decided
> + *    by 'scale', so multiple range TLBI operations may be required.
> + *    Start from scale = 0, flush the corresponding number of pages
> + *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
> + *    until no pages left.
> + *
> + * Note that certain ranges can be represented by either num = 31 and
> + * scale or num = 0 and scale + 1. The loop below favours the latter
> + * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
> + */
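
The comment reads well. For anyone checking the arithmetic: a range op at a
given scale covers (num+1) << (5*scale + 1) pages, so the loop peels off the
odd page first and then consumes the remainder at increasing scales. Below is
a quick user-space sketch of the decomposition -- not the kernel code;
RANGE_NUM()/RANGE_PAGES() are simplified stand-ins for __TLBI_RANGE_NUM() and
__TLBI_RANGE_PAGES(), and it assumes range ops are supported with
stride == PAGE_SIZE:

  #include <stdio.h>

  /*
   * Simplified stand-ins for the kernel macros: 'num' is a 5-bit field,
   * and the "- 1" caps the usable value at 30 (a masked value of 0
   * yields -1, meaning "no range op fits at this scale").
   */
  #define TLBI_RANGE_MASK         0x1fUL
  #define RANGE_NUM(pages, scale) \
          ((int)(((pages) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK) - 1)
  #define RANGE_PAGES(num, scale) \
          ((unsigned long)((num) + 1) << (5 * (scale) + 1))

  int main(void)
  {
          unsigned long pages = 65;       /* arbitrary example */
          int scale = 0;

          while (pages > 0) {
                  if (pages % 2 == 1) {   /* odd residue: single-page op */
                          printf("single-page op (1 page)\n");
                          pages -= 1;
                          continue;
                  }

                  int num = RANGE_NUM(pages, scale);
                  if (num >= 0) {         /* a range op fits at this scale */
                          printf("range op: scale=%d num=%d -> %lu pages\n",
                                 scale, num, RANGE_PAGES(num, scale));
                          pages -= RANGE_PAGES(num, scale);
                  }
                  scale++;
          }

          return 0;
  }

For pages = 65 it prints one single-page op, then one range op at scale = 1,
num = 0 covering the remaining 64 pages. That 64-page chunk is exactly the
case the last paragraph of the comment describes: num = 31, scale = 0 would
also cover it, but num is capped at 30, so scale + 1 is used instead.
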
> +#define __flush_tlb_range_op(op, start, pages, stride,               \
> +                                asid, tlb_level, tlbi_user) do {     \
> +     int num = 0;                                                    \
> +     int scale = 0;                                                  \
> +     unsigned long addr;                                             \
> +                                                                     \
> +     while (pages > 0) {                                             \
> +             if (!system_supports_tlb_range() ||                     \
> +                 pages % 2 == 1) {                                   \
> +                     addr = __TLBI_VADDR(start, asid);               \
> +                     __tlbi_level(op, addr, tlb_level);              \
> +                     if (tlbi_user)                                  \
> +                             __tlbi_user_level(op, addr, tlb_level); \
> +                     start += stride;                                \
> +                     pages -= stride >> PAGE_SHIFT;                  \
> +                     continue;                                       \
> +             }                                                       \
> +                                                                     \
> +             num = __TLBI_RANGE_NUM(pages, scale);                   \
> +             if (num >= 0) {                                         \
> +                     addr = __TLBI_VADDR_RANGE(start, asid, scale,   \
> +                                               num, tlb_level);      \
> +                     __tlbi(r##op, addr);                            \
> +                     if (tlbi_user)                                  \
> +                             __tlbi_user(r##op, addr);               \
> +                     start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
> +                     pages -= __TLBI_RANGE_PAGES(num, scale);        \
> +             }                                                       \
> +             scale++;                                                \
> +     }                                                               \
> +} while (0)
> +

There is a warning reported by 'checkpatch.pl':

WARNING: suspect code indent for conditional statements (32, 8)
#52: FILE: arch/arm64/include/asm/tlbflush.h:299:
+                                asid, tlb_level, tlbi_user) do {     \
[...]
+     unsigned long addr;                                             \

total: 0 errors, 1 warnings, 125 lines checked

You probably need to tweak the macro header as below to avoid the warning:

#define __flush_tlb_range_op(op, start, pages, stride,                \
                             asid, tlb_level, tlbi_user)              \
do {                                                                  \
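
One more note on the 'op' parameter: per the commit message, the KVM side will
instantiate this loop with the IPA-based stage-2 operation. That caller is not
in this patch, but presumably it ends up looking something like this
(hypothetical sketch, not taken from the series):

  /*
   * Hypothetical future caller: stage-2 IPA invalidation has no ASID
   * and no TTBR0/user variant, so asid is 0 and tlbi_user is false.
   * The macro's r##op pasting yields ripas2e1is for the range form.
   */
  __flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, tlb_level, false);
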
>  static inline void __flush_tlb_range(struct vm_area_struct *vma,
>  				     unsigned long start, unsigned long end,
>  				     unsigned long stride, bool last_level,
>  				     int tlb_level)
>  {
> -	int num = 0;
> -	int scale = 0;
> -	unsigned long asid, addr, pages;
> +	unsigned long asid, pages;
> 
>  	start = round_down(start, stride);
>  	end = round_up(end, stride);
> @@ -307,56 +354,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>  	dsb(ishst);
>  	asid = ASID(vma->vm_mm);
> 
> -	/*
> -	 * When the CPU does not support TLB range operations, flush the TLB
> -	 * entries one by one at the granularity of 'stride'. If the TLB
> -	 * range ops are supported, then:
> -	 *
> -	 * 1. If 'pages' is odd, flush the first page through non-range
> -	 *    operations;
> -	 *
> -	 * 2. For remaining pages: the minimum range granularity is decided
> -	 *    by 'scale', so multiple range TLBI operations may be required.
> -	 *    Start from scale = 0, flush the corresponding number of pages
> -	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
> -	 *    until no pages left.
> -	 *
> -	 * Note that certain ranges can be represented by either num = 31 and
> -	 * scale or num = 0 and scale + 1. The loop below favours the latter
> -	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
> -	 */
> -	while (pages > 0) {
> -		if (!system_supports_tlb_range() ||
> -		    pages % 2 == 1) {
> -			addr = __TLBI_VADDR(start, asid);
> -			if (last_level) {
> -				__tlbi_level(vale1is, addr, tlb_level);
> -				__tlbi_user_level(vale1is, addr, tlb_level);
> -			} else {
> -				__tlbi_level(vae1is, addr, tlb_level);
> -				__tlbi_user_level(vae1is, addr, tlb_level);
> -			}
> -			start += stride;
> -			pages -= stride >> PAGE_SHIFT;
> -			continue;
> -		}
> -
> -		num = __TLBI_RANGE_NUM(pages, scale);
> -		if (num >= 0) {
> -			addr = __TLBI_VADDR_RANGE(start, asid, scale,
> -						  num, tlb_level);
> -			if (last_level) {
> -				__tlbi(rvale1is, addr);
> -				__tlbi_user(rvale1is, addr);
> -			} else {
> -				__tlbi(rvae1is, addr);
> -				__tlbi_user(rvae1is, addr);
> -			}
> -			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
> -			pages -= __TLBI_RANGE_PAGES(num, scale);
> -		}
> -		scale++;
> -	}
> +	if (last_level)
> +		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
> +	else
> +		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
> +
>  	dsb(ish);
>  }
> 

Thanks,
Gavin