Date: Sat, 13 Apr 2024 10:56:25 +0100
Message-ID: <86h6g5si0m.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Sean Christopherson <seanjc@google.com>
Cc: Will Deacon <will@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Oliver Upton <oliver.upton@linux.dev>,
	Tianrui Zhao <zhaotianrui@loongson.cn>,
	Bibo Mao <maobibo@loongson.cn>,
	Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Nicholas Piggin <npiggin@gmail.com>,
	Anup Patel <anup@brainfault.org>,
	Atish Patra <atishp@atishpatra.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org
Subject: Re: [PATCH 1/4] KVM: delete .change_pte MMU notifier callback
References: <20240405115815.3226315-1-pbonzini@redhat.com>
	<20240405115815.3226315-2-pbonzini@redhat.com>
	<20240412104408.GA27645@willie-the-truck>
	<86jzl2sovz.wl-maz@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 12 Apr 2024 15:54:22 +0100,
Sean Christopherson <seanjc@google.com> wrote:
> 
> On Fri, Apr 12, 2024, Marc Zyngier wrote:
> > On Fri, 12 Apr 2024 11:44:09 +0100, Will Deacon <will@kernel.org> wrote:
> > > On Fri, Apr 05, 2024 at 07:58:12AM -0400, Paolo Bonzini wrote:
> > > Also, if you're in the business of hacking the MMU notifier code, it
> > > would be really great to change the .clear_flush_young() callback so
> > > that the architecture could handle the TLB invalidation. At the moment,
> > > the core KVM code invalidates the whole VMID courtesy of 'flush_on_ret'
> > > being set by kvm_handle_hva_range(), whereas we could do a much
> > > lighter-weight and targeted TLBI in the architecture page-table code
> > > when we actually update the ptes for small ranges.
> > 
> > Indeed, and I was looking at this earlier this week as it has a pretty
> > devastating effect with NV (it blows away the shadow S2 for that VMID,
> > with costly consequences).
> > 
> > In general, it feels like the TLB invalidation should stay with the
> > code that deals with the page tables, as it has a pretty good idea of
> > what needs to be invalidated and how -- especially on architectures
> > that have a HW-broadcast facility like arm64.
> 
> Would this be roughly on par with an in-line flush on arm64? The
> simpler, more straightforward solution would be to let architectures
> override flush_on_ret, but I would prefer something like the below, as
> x86 can also utilize a range-based flush when running as a nested
> hypervisor.
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index ff0a20565f90..b65116294efe 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -601,6 +601,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  	struct kvm_gfn_range gfn_range;
>  	struct kvm_memory_slot *slot;
>  	struct kvm_memslots *slots;
> +	bool need_flush = false;
>  	int i, idx;
>  
>  	if (WARN_ON_ONCE(range->end <= range->start))
> @@ -653,10 +654,22 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  				break;
>  			}
>  			r.ret |= range->handler(kvm, &gfn_range);
> +
> +			/*
> +			 * Use a precise gfn-based TLB flush when possible, as
> +			 * most mmu_notifier events affect a small-ish range.
> +			 * Fall back to a full TLB flush if the gfn-based flush
> +			 * fails, and don't bother trying the gfn-based flush
> +			 * if a full flush is already pending.
> +			 */
> +			if (range->flush_on_ret && !need_flush && r.ret &&
> +			    kvm_arch_flush_remote_tlbs_range(kvm, gfn_range.start,
> +							     gfn_range.end - gfn_range.start + 1))
> +				need_flush = true;
>  		}
>  	}
>  
> -	if (range->flush_on_ret && r.ret)
> +	if (need_flush)
>  		kvm_flush_remote_tlbs(kvm);
>  
>  	if (r.found_memslot)

I think this works for us on HW that has range invalidation, which
would already be a positive move.

For the lesser HW that isn't range capable, it also gives us the
opportunity to perform the iteration ourselves, or to go for the
nuclear option if the range is larger than some arbitrary constant
(though this is additional work).

But this still considers the whole range as being affected by
range->handler(). It'd be interesting to try and see whether more
precise tracking is (or isn't) generally beneficial.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.