Message-ID: <8987d68b-d62a-7a9b-3aa3-cd5cc7ad551c@redhat.com>
Date: Tue, 18 Jul 2023 14:21:30 +0800
From: Shaoqin Huang
Subject: Re: [PATCH v6 05/11] arm64: tlb: Refactor the core flush algorithm
 of __flush_tlb_range
To: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier, James Morse,
 Suzuki K Poulose
Cc: Paolo Bonzini, Sean Christopherson, Huacai Chen, Zenghui Yu,
 Anup Patel, Atish Patra, Jing Zhang, Colton Lewis, David Matlack,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Catalin Marinas, Gavin Shan
References: <20230715005405.3689586-1-rananta@google.com>
 <20230715005405.3689586-6-rananta@google.com>
In-Reply-To: <20230715005405.3689586-6-rananta@google.com>
On 7/15/23 08:53, Raghavendra Rao Ananta wrote:
> Currently, the core TLB flush functionality of __flush_tlb_range()
> hardcodes vae1is (and variants) for the flush operation. In the
> upcoming patches, the KVM code reuses this core algorithm with
> ipas2e1is for range based TLB invalidations based on the IPA.
> Hence, extract the core flush functionality of __flush_tlb_range()
> into its own macro that accepts an 'op' argument to pass any
> TLBI operation, such that other callers (KVM) can benefit.
>
> No functional changes intended.
>
> Signed-off-by: Raghavendra Rao Ananta
> Reviewed-by: Catalin Marinas
> Reviewed-by: Gavin Shan

Reviewed-by: Shaoqin Huang
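One thing worth spelling out for other readers: with the 'op' argument,
the future call sites collapse to a single op selection. As a rough
sketch of what a stage-2 (IPA-based) caller could look like -- the
function name and the exact argument choices below are my own
illustration, not the code from the later patches in this series:

	/*
	 * Illustrative only: a hypothetical stage-2 caller. IPAs carry no
	 * ASID (hence asid = 0), there is no user/EL0 variant of ipas2e1is
	 * (hence tlbi_user = false), and tlb_level = 0 passes no level hint.
	 */
	static void example_flush_s2_range(unsigned long start,
					   unsigned long pages,
					   unsigned long stride)
	{
		dsb(ishst);
		__flush_tlb_range_op(ipas2e1is, start, pages, stride, 0, 0, false);
		dsb(ish);
	}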
> ---
>   arch/arm64/include/asm/tlbflush.h | 109 +++++++++++++++---------
>   1 file changed, 56 insertions(+), 53 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..f7fafba25add 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -278,14 +278,62 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>    */
>   #define MAX_TLBI_OPS	PTRS_PER_PTE
>   
> +/* When the CPU does not support TLB range operations, flush the TLB
> + * entries one by one at the granularity of 'stride'. If the TLB
> + * range ops are supported, then:
> + *
> + * 1. If 'pages' is odd, flush the first page through non-range
> + *    operations;
> + *
> + * 2. For remaining pages: the minimum range granularity is decided
> + *    by 'scale', so multiple range TLBI operations may be required.
> + *    Start from scale = 0, flush the corresponding number of pages
> + *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
> + *    until no pages left.
> + *
> + * Note that certain ranges can be represented by either num = 31 and
> + * scale or num = 0 and scale + 1. The loop below favours the latter
> + * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
> + */
> +#define __flush_tlb_range_op(op, start, pages, stride,			\
> +				asid, tlb_level, tlbi_user)		\
> +do {									\
> +	int num = 0;							\
> +	int scale = 0;							\
> +	unsigned long addr;						\
> +									\
> +	while (pages > 0) {						\
> +		if (!system_supports_tlb_range() ||			\
> +		    pages % 2 == 1) {					\
> +			addr = __TLBI_VADDR(start, asid);		\
> +			__tlbi_level(op, addr, tlb_level);		\
> +			if (tlbi_user)					\
> +				__tlbi_user_level(op, addr, tlb_level);	\
> +			start += stride;				\
> +			pages -= stride >> PAGE_SHIFT;			\
> +			continue;					\
> +		}							\
> +									\
> +		num = __TLBI_RANGE_NUM(pages, scale);			\
> +		if (num >= 0) {						\
> +			addr = __TLBI_VADDR_RANGE(start, asid, scale,	\
> +						  num, tlb_level);	\
> +			__tlbi(r##op, addr);				\
> +			if (tlbi_user)					\
> +				__tlbi_user(r##op, addr);		\
> +			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
> +			pages -= __TLBI_RANGE_PAGES(num, scale);	\
> +		}							\
> +		scale++;						\
> +	}								\
> +} while (0)
> +
>   static inline void __flush_tlb_range(struct vm_area_struct *vma,
>   				     unsigned long start, unsigned long end,
>   				     unsigned long stride, bool last_level,
>   				     int tlb_level)
>   {
> -	int num = 0;
> -	int scale = 0;
> -	unsigned long asid, addr, pages;
> +	unsigned long asid, pages;
>   
>   	start = round_down(start, stride);
>   	end = round_up(end, stride);
> @@ -307,56 +355,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>   	dsb(ishst);
>   	asid = ASID(vma->vm_mm);
>   
> -	/*
> -	 * When the CPU does not support TLB range operations, flush the TLB
> -	 * entries one by one at the granularity of 'stride'. If the TLB
> -	 * range ops are supported, then:
> -	 *
> -	 * 1. If 'pages' is odd, flush the first page through non-range
> -	 *    operations;
> -	 *
> -	 * 2. For remaining pages: the minimum range granularity is decided
> -	 *    by 'scale', so multiple range TLBI operations may be required.
> -	 *    Start from scale = 0, flush the corresponding number of pages
> -	 *    ((num+1)*2^(5*scale+1) starting from 'addr'), then increase it
> -	 *    until no pages left.
> -	 *
> -	 * Note that certain ranges can be represented by either num = 31 and
> -	 * scale or num = 0 and scale + 1. The loop below favours the latter
> -	 * since num is limited to 30 by the __TLBI_RANGE_NUM() macro.
> -	 */
> -	while (pages > 0) {
> -		if (!system_supports_tlb_range() ||
> -		    pages % 2 == 1) {
> -			addr = __TLBI_VADDR(start, asid);
> -			if (last_level) {
> -				__tlbi_level(vale1is, addr, tlb_level);
> -				__tlbi_user_level(vale1is, addr, tlb_level);
> -			} else {
> -				__tlbi_level(vae1is, addr, tlb_level);
> -				__tlbi_user_level(vae1is, addr, tlb_level);
> -			}
> -			start += stride;
> -			pages -= stride >> PAGE_SHIFT;
> -			continue;
> -		}
> -
> -		num = __TLBI_RANGE_NUM(pages, scale);
> -		if (num >= 0) {
> -			addr = __TLBI_VADDR_RANGE(start, asid, scale,
> -						  num, tlb_level);
> -			if (last_level) {
> -				__tlbi(rvale1is, addr);
> -				__tlbi_user(rvale1is, addr);
> -			} else {
> -				__tlbi(rvae1is, addr);
> -				__tlbi_user(rvae1is, addr);
> -			}
> -			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
> -			pages -= __TLBI_RANGE_PAGES(num, scale);
> -		}
> -		scale++;
> -	}
> +	if (last_level)
> +		__flush_tlb_range_op(vale1is, start, pages, stride, asid, tlb_level, true);
> +	else
> +		__flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true);
> +
>   	dsb(ish);
>   }
> --
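Also, for anyone puzzling over the '(num+1)*2^(5*scale+1)' part of the
comment: the decomposition is easy to play with in userspace. Here is a
throwaway demo I used that mirrors the loop's bookkeeping (macro names
shortened here; the kernel's are __TLBI_RANGE_NUM()/__TLBI_RANGE_PAGES(),
and system_supports_tlb_range() is assumed true):

	#include <stdio.h>

	/* Mirrors __TLBI_RANGE_PAGES(): one range op covers
	 * (num + 1) << (5 * scale + 1) pages. */
	#define RANGE_PAGES(num, scale) \
		((unsigned long)((num) + 1) << (5 * (scale) + 1))
	/* Mirrors __TLBI_RANGE_NUM(): largest usable num (0..30)
	 * at this scale, or -1 if the range is too small for it. */
	#define RANGE_NUM(pages, scale) \
		((int)(((pages) >> (5 * (scale) + 1)) & 0x1f) - 1)

	int main(void)
	{
		unsigned long pages = 527;	/* arbitrary odd page count */
		int scale = 0;

		while (pages > 0) {
			if (pages % 2 == 1) {	/* odd leftover: non-range TLBI */
				printf("single-page op\n");
				pages -= 1;
				continue;
			}

			int num = RANGE_NUM(pages, scale);

			if (num >= 0) {
				printf("scale=%d num=%d -> %lu pages\n",
				       scale, num, RANGE_PAGES(num, scale));
				pages -= RANGE_PAGES(num, scale);
			}
			scale++;
		}
		return 0;
	}

For pages = 527 this prints one single-page op (the odd page), then
"scale=0 num=6 -> 14 pages" and "scale=1 num=7 -> 512 pages". It also
shows why e.g. 64 pages come out as num = 0 at scale = 1 rather than
num = 31 at scale = 0: RANGE_NUM() caps num at 30, exactly as the
comment says.

Shaoqin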