From: Sean Christopherson
Date: Fri, 02 Apr 2021 14:59:18 +0000
Subject: Re: [PATCH v2 09/10] KVM: Don't take mmu_lock for range invalidation unless necessary
References: <20210402005658.3024832-1-seanjc@google.com>
 <20210402005658.3024832-10-seanjc@google.com>
 <417bd6b5-b7d0-ed22-adae-02150cdbfebe@redhat.com>
In-Reply-To: <417bd6b5-b7d0-ed22-adae-02150cdbfebe@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
 James Morse, Julien Thierry, Suzuki K Poulose, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org,
 kvm-ppc@vger.kernel.org, linux-kernel@vger.kernel.org, Ben Gardon

On Fri, Apr 02, 2021, Paolo Bonzini wrote:
> On 02/04/21 02:56, Sean Christopherson wrote:
> > Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
> > notifications.  Because mmu_notifier_count must be modified while holding
> > mmu_lock for write, and must always be paired across start->end to stay
> > balanced, lock elision must happen in both or none.  To meet that
> > requirement, add a rwsem to prevent memslot updates across range_start()
> > and range_end().
> >
> > Use a rwsem instead of a rwlock since most notifiers _allow_ blocking,
> > and the lock will be held across the entire start() ... end() sequence.
> > If anything in the sequence sleeps, including the caller or a different
> > notifier, holding the spinlock would be disastrous.
> >
> > For notifiers that _disallow_ blocking, e.g. OOM reaping, simply go down
> > the slow path of unconditionally acquiring mmu_lock.  The sane
> > alternative would be to try to acquire the lock and force the notifier
> > to retry on failure.  But since OOM is currently the _only_ scenario
> > where blocking is disallowed, attempting to optimize a guest that has
> > been marked for death is pointless.
> >
> > Unconditionally define and use mmu_notifier_slots_lock in the memslots
> > code, purely to avoid more #ifdefs.  The overhead of acquiring the lock
> > is negligible when the lock is uncontested, which will always be the
> > case when the MMU notifiers are not used.
> >
> > Note, technically flag-only memslot updates could be allowed in
> > parallel, but stalling a memslot update for a relatively short amount
> > of time is not a scalability issue, and this is all more than complex
> > enough.
>
> Proposal for the locking documentation:

Argh, sorry!  Looks great, I owe you.

> diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
> index b21a34c34a21..3e4ad7de36cb 100644
> --- a/Documentation/virt/kvm/locking.rst
> +++ b/Documentation/virt/kvm/locking.rst
> @@ -16,6 +16,13 @@ The acquisition orders for mutexes are as follows:
>  - kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
>    them together is quite rare.
>
> +- The kvm->mmu_notifier_slots_lock rwsem ensures that pairs of
> +  invalidate_range_start() and invalidate_range_end() callbacks
> +  use the same memslots array.  kvm->slots_lock is taken outside the
> +  write-side critical section of kvm->mmu_notifier_slots_lock, so
> +  MMU notifiers must not take kvm->slots_lock.  No other write-side
> +  critical sections should be added.
> +
>  On x86, vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock.
>
>  Everything else is a leaf: no other lock is taken inside the critical
>  sections.
>
> Paolo
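
To make the pairing concrete, the shape of the notifier side is roughly the
below.  This is only a sketch of the idea, not the actual patch:
mmu_notifier_to_kvm(), kvm_range_hits_memslot(), kvm_unmap_range() and the
KVM_MMU_LOCK() wrappers are stand-ins for the real helpers.

  static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
  				const struct mmu_notifier_range *range)
  {
  	struct kvm *kvm = mmu_notifier_to_kvm(mn);
  	bool blockable = mmu_notifier_range_blockable(range);

  	/*
  	 * Hold the rwsem for read across start() ... end() so that a
  	 * memslot update can't swap the memslots array between the two
  	 * callbacks.  The elision decision made here is then guaranteed
  	 * to repeat verbatim in end(), keeping mmu_notifier_count
  	 * balanced.
  	 */
  	if (blockable)
  		down_read(&kvm->mmu_notifier_slots_lock);

  	/*
  	 * Non-blockable notifiers, currently only the OOM reaper, may
  	 * not sleep on the rwsem; they unconditionally take the slow
  	 * path below.
  	 */
  	if (blockable && !kvm_range_hits_memslot(kvm, range))
  		return 0;	/* unrelated range, elide mmu_lock */

  	KVM_MMU_LOCK(kvm);
  	kvm->mmu_notifier_count++;
  	kvm_unmap_range(kvm, range);
  	KVM_MMU_UNLOCK(kvm);
  	return 0;
  }

  static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
  				const struct mmu_notifier_range *range)
  {
  	struct kvm *kvm = mmu_notifier_to_kvm(mn);
  	bool blockable = mmu_notifier_range_blockable(range);

  	/* Memslots can't have changed, so this mirrors start() exactly. */
  	if (!blockable || kvm_range_hits_memslot(kvm, range)) {
  		KVM_MMU_LOCK(kvm);
  		kvm->mmu_notifier_count--;
  		KVM_MMU_UNLOCK(kvm);
  	}

  	if (blockable)
  		up_read(&kvm->mmu_notifier_slots_lock);
  }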
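
And the write side that the documentation pins underneath kvm->slots_lock,
again sketched with details such as the memslot generation update omitted:

  static void install_new_memslots(struct kvm *kvm, int as_id,
  				 struct kvm_memslots *slots)
  {
  	/*
  	 * Callers hold kvm->slots_lock, so the rwsem's write-side
  	 * critical section nests inside it.  A notifier that took
  	 * kvm->slots_lock while holding the rwsem for read would
  	 * deadlock against this path, hence the "must not" above.
  	 */
  	down_write(&kvm->mmu_notifier_slots_lock);
  	rcu_assign_pointer(kvm->memslots[as_id], slots);
  	up_write(&kvm->mmu_notifier_slots_lock);

  	synchronize_srcu_expedited(&kvm->srcu);
  }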