From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: KVM: x86: move vapic page handling out of fast path
Date: Mon, 23 Jun 2008 12:04:25 -0300
Message-ID: <20080623150425.GA30975@dmt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm-devel
To: Avi Kivity
Return-path:
Received: from mx1.redhat.com ([66.187.233.31]:34276 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1755044AbYFWPEn
	(ORCPT ); Mon, 23 Jun 2008 11:04:43 -0400
Content-Disposition: inline
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, Jun 23, 2008 at 05:29:34AM +0300, Avi Kivity wrote:
> The page can't be swapped out since its reference count is elevated
> indefinitely.

AFAICT the page can be swapped out when the guest has exited to userspace
and the only reference is the qemu userspace mapping.

> > So, what do you have against this patch ?
> >
>
> We need to move away from reference counts, they make kvm brittle. The
> patch improves the current state of things (since pages are pinned
> indefinitely anyway now) but takes the wrong direction for the future.
>
> Note that kvmclock has the same issue, so we might as well share the
> solution:
>
> struct kvm_fast_guest_page {
>         gfn_t gfn;
>         struct page *page;
>         spinlock_t lock;
>         struct list_head link;
> };
>
> The mmu notifier callbacks can scan this list and null any pages that
> match the gfn being cleared, but in the normal case, ->page can be
> accessed directly.

OK, can you please apply this at least:

KVM: move slots_lock acquisition down to vapic_exit

There is no need to grab slots_lock if the vapic_page will not be
touched.
Signed-off-by: Marcelo Tosatti

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 26b051b..29e8983 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2759,8 +2759,10 @@ static void vapic_exit(struct kvm_vcpu *vcpu)
 	if (!apic || !apic->vapic_addr)
 		return;
 
+	down_read(&vcpu->kvm->slots_lock);
 	kvm_release_page_dirty(apic->vapic_page);
 	mark_page_dirty(vcpu->kvm, apic->vapic_addr >> PAGE_SHIFT);
+	up_read(&vcpu->kvm->slots_lock);
 }
 
 static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
@@ -2916,9 +2918,7 @@ out:
 
 	post_kvm_run_save(vcpu, kvm_run);
 
-	down_read(&vcpu->kvm->slots_lock);
 	vapic_exit(vcpu);
-	up_read(&vcpu->kvm->slots_lock);
 
 	return r;
 }