Date: Thu, 27 Jul 2023 14:01:37 +0100
Message-ID: <87lef1qzim.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Raghavendra Rao Ananta <rananta@google.com>
Cc: Oliver Upton <oliver.upton@linux.dev>,
    James Morse <james.morse@arm.com>,
    Suzuki K Poulose <suzuki.poulose@arm.com>,
    Paolo Bonzini <pbonzini@redhat.com>,
    Sean Christopherson <seanjc@google.com>,
    Huacai Chen <chenhuacai@kernel.org>,
    Zenghui Yu <yuzenghui@huawei.com>,
    Anup Patel <anup@brainfault.org>,
    Atish Patra <atishp@atishpatra.org>,
    Jing Zhang <jingzhangos@google.com>,
    Reiji Watanabe <reijiw@google.com>,
    Colton Lewis <coltonlewis@google.com>,
    David Matlack <dmatlack@google.com>,
    linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev,
    linux-mips@vger.kernel.org,
    kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org,
    Gavin Shan <gshan@redhat.com>,
    Shaoqin Huang <shahuang@redhat.com>
Subject: Re: [PATCH v7 08/12] KVM: arm64: Define kvm_tlb_flush_vmid_range()
In-Reply-To: <87o7jxr06t.wl-maz@kernel.org>
References: <20230722022251.3446223-1-rananta@google.com>
    <20230722022251.3446223-9-rananta@google.com>
    <87o7jxr06t.wl-maz@kernel.org>

On Thu, 27 Jul 2023 13:47:06 +0100,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Sat, 22 Jul 2023 03:22:47 +0100,
> Raghavendra Rao Ananta <rananta@google.com> wrote:
> > 
> > Implement the helper kvm_tlb_flush_vmid_range() that acts
> > as a wrapper for range-based TLB invalidations. For the
> > given VMID, use the range-based TLBI instructions to do
> > the job or fallback to invalidating all the TLB entries.
> > 
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > Reviewed-by: Gavin Shan <gshan@redhat.com>
> > Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 10 ++++++++++
> >  arch/arm64/kvm/hyp/pgtable.c         | 20 ++++++++++++++++++++
> >  2 files changed, 30 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index 8294a9a7e566..5e8b1ff07854 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -754,4 +754,14 @@ enum kvm_pgtable_prot kvm_pgtable_stage2_pte_prot(kvm_pte_t pte);
> >   * kvm_pgtable_prot format.
> >   */
> >  enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
> > +
> > +/**
> > + * kvm_tlb_flush_vmid_range() - Invalidate/flush a range of TLB entries
> > + *
> > + * @mmu: Stage-2 KVM MMU struct
> > + * @addr: The base Intermediate physical address from which to invalidate
> > + * @size: Size of the range from the base to invalidate
> > + */
> > +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
> > +				phys_addr_t addr, size_t size);
> >  #endif /* __ARM64_KVM_PGTABLE_H__ */
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index aa740a974e02..5d14d5d5819a 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -670,6 +670,26 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
> >  	return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
> >  }
> >  
> > +void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
> > +				phys_addr_t addr, size_t size)
> > +{
> > +	unsigned long pages, inval_pages;
> > +
> > +	if (!system_supports_tlb_range()) {
> > +		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> > +		return;
> > +	}
> > +
> > +	pages = size >> PAGE_SHIFT;
> > +	while (pages > 0) {
> > +		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
> > +		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
> > +
> > +		addr += inval_pages << PAGE_SHIFT;
> > +		pages -= inval_pages;
> > +	}
> > +}
> > +
> 
> This really shouldn't live in pgtable.c. This code gets linked into
> the EL2 object. What do you think happens if, for some reason, this
> gets called *from EL2*?

Ah, actually, nothing too bad would happen, as we convert the
kvm_call_hyp() into a function call.

But still, we don't need two copies of this stuff, and it can live in
mmu.c.

	M.

-- 
Without deviation from the norm, progress is not possible.
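
[Illustrative sketch, not part of the original thread: this is roughly what
the relocation suggested above might look like, with the function body taken
verbatim from the patch but hosted in arch/arm64/kvm/mmu.c instead of the
EL2-shared hyp/pgtable.c. The exact placement within mmu.c is an assumption;
only the hosting file changes, so a single copy exists outside the EL2 object.]

/*
 * Sketch: same helper as in the patch, assumed to live in
 * arch/arm64/kvm/mmu.c per the review comment above.
 */
void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
				phys_addr_t addr, size_t size)
{
	unsigned long pages, inval_pages;

	/* Without range-based TLBI support, fall back to a full VMID flush. */
	if (!system_supports_tlb_range()) {
		kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
		return;
	}

	/* Invalidate in chunks of at most MAX_TLBI_RANGE_PAGES pages. */
	pages = size >> PAGE_SHIFT;
	while (pages > 0) {
		inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
		kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);

		addr += inval_pages << PAGE_SHIFT;
		pages -= inval_pages;
	}
}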