Date: Thu, 03 Aug 2023 00:28:12 +0100
Message-ID: <86fs5158j7.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Raghavendra Rao Ananta <rananta@google.com>
Cc: Oliver Upton <oliver.upton@linux.dev>,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Sean Christopherson <seanjc@google.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Anup Patel <anup@brainfault.org>,
	Atish Patra <atishp@atishpatra.org>,
	Jing Zhang <jingzhangos@google.com>,
	Reiji Watanabe <reijiw@google.com>,
	Colton Lewis <coltonlewis@google.com>,
	David Matlack <dmatlack@google.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-mips@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org
Subject: Re: [PATCH v7 12/12] KVM: arm64: Use TLBI range-based instructions for unmap
References: <20230722022251.3446223-1-rananta@google.com>
	<20230722022251.3446223-13-rananta@google.com>
	<87jzulqz0v.wl-maz@kernel.org>
On Mon, 31 Jul 2023 19:26:09 +0100,
Raghavendra Rao Ananta <rananta@google.com> wrote:
> 
> On Thu, Jul 27, 2023 at 6:12 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > On Sat, 22 Jul 2023 03:22:51 +0100,
> > Raghavendra Rao Ananta <rananta@google.com> wrote:
> > >
> > > The current implementation of the stage-2 unmap walker traverses
> > > the given range and, as part of break-before-make, performs
> > > TLB invalidations with a DSB for every PTE. At scale, this
> > > combination could cause a performance bottleneck on some systems.
> > >
> > > Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
> > > invalidations until the entire walk is finished, and then
> > > use range-based instructions to invalidate the TLBs in one go.
> > > Condition deferred TLB invalidation on the system supporting FWB,
> > > as the optimization is entirely pointless when the unmap walker
> > > needs to perform CMOs.
> > >
> > > Rename stage2_put_pte() to stage2_unmap_put_pte() as the function
> > > now serves the stage-2 unmap walker specifically, rather than
> > > acting as a generic helper.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > > ---
> > >  arch/arm64/kvm/hyp/pgtable.c | 67 +++++++++++++++++++++++++++++++-----
> > >  1 file changed, 58 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index 5ef098af1736..cf88933a2ea0 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -831,16 +831,54 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
> > >  	smp_store_release(ctx->ptep, new);
> > >  }
> > >
> > > -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> > > -			   struct kvm_pgtable_mm_ops *mm_ops)
> > > +struct stage2_unmap_data {
> > > +	struct kvm_pgtable *pgt;
> > > +	bool defer_tlb_flush_init;
> > > +};
> > > +
> > > +static bool __stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
> > > +{
> > > +	/*
> > > +	 * If FEAT_TLBIRANGE is implemented, defer the individual
> > > +	 * TLB invalidations until the entire walk is finished, and
> > > +	 * then use the range-based TLBI instructions to do the
> > > +	 * invalidations. Condition deferred TLB invalidation on the
> > > +	 * system supporting FWB, as the optimization is entirely
> > > +	 * pointless when the unmap walker needs to perform CMOs.
> > > +	 */
> > > +	return system_supports_tlb_range() && stage2_has_fwb(pgt);
> > > +}
> > > +
> > > +static bool stage2_unmap_defer_tlb_flush(struct stage2_unmap_data *unmap_data)
> > > +{
> > > +	bool defer_tlb_flush = __stage2_unmap_defer_tlb_flush(unmap_data->pgt);
> > > +
> > > +	/*
> > > +	 * Since __stage2_unmap_defer_tlb_flush() is based on alternative
> > > +	 * patching and the TLBI operations' behavior depends on this,
> > > +	 * track if there's any change in the state during the unmap sequence.
> > > +	 */
> > > +	WARN_ON(unmap_data->defer_tlb_flush_init != defer_tlb_flush);
> > > +	return defer_tlb_flush;
> >
> > I really don't understand what you're testing here. The ability to
> > defer TLB invalidation is a function of the system capabilities
> > (range+FWB) and a single flag that is only set on the host for pKVM.
> >
> > How could that change in the middle of the life of the system? It
> > further begs the question about the need for the unmap_data
> > structure.
> >
> > It looks to me that we could simply pass the pgt pointer around and be
> > done with it. Am I missing something obvious?
> >
> From one of the previous comments [1] (used in a different context),
> I'm given to understand that since these feature checks are governed
> by alternative patching, they can potentially change (at runtime?). Is
> that not the case, or have I misunderstood the idea in comment [1]
> entirely? Is it solely used for optimization purposes and set only
> once?

Alternative patching, just like the static branches used to implement
the capability stuff, is a one-way street. At the point where KVM is
initialised, these configurations are set in stone, and there is no
going back.
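To make that concrete, here is a minimal sketch of the static-branch
pattern (illustrative only: tlbirange_caps_init() and
cpu_has_tlbirange() are made-up names, not the actual arm64
capability code):

#include <linux/init.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(have_tlbirange);

/* Hypothetical probe standing in for the real ID-register check. */
static bool cpu_has_tlbirange(void)
{
	return false;	/* placeholder; arm64 derives this from ID_AA64ISAR0_EL1.TLB */
}

static int __init tlbirange_caps_init(void)
{
	/* Runs exactly once, early during boot. */
	if (cpu_has_tlbirange())
		static_branch_enable(&have_tlbirange);
	/* From here on, the branch sites are patched in place for good. */
	return 0;
}
early_initcall(tlbirange_caps_init);

static bool tlbirange_supported(void)
{
	/* Compiles down to a patched NOP/branch; never re-evaluated. */
	return static_branch_unlikely(&have_tlbirange);
}

Once the initcall has run, tlbirange_supported() can never change its
answer, which is why a WARN_ON() comparing two evaluations of such a
check can never usefully fire.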
> If that's the case, I can get rid of the WARN_ON() and unmap_data.

Yes, please.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.