From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: KVM: MMU: optimize set_spte for page sync
Date: Sun, 23 Nov 2008 12:36:29 +0200
Message-ID: <4929322D.7050503@redhat.com>
References: <20081121184927.GA20607@dmt.cnet>
In-Reply-To: <20081121184927.GA20607@dmt.cnet>
To: Marcelo Tosatti
Cc: kvm-devel

Marcelo Tosatti wrote:
> The cost of hash table and memslot lookups is quite significant if the
> workload is pagetable-write intensive, resulting in increased mmu_lock
> contention.
>
> @@ -1593,7 +1593,16 @@ static int set_spte(struct kvm_vcpu *vcp
>
>  		spte |= PT_WRITABLE_MASK;
>
> -		if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
> +		/*
> +		 * Optimization: for pte sync, if spte was writable the hash
> +		 * lookup is unnecessary (and expensive). Write protection
> +		 * is responsibility of mmu_get_page / kvm_sync_page.
> +		 * Same reasoning can be applied to dirty page accounting.
> +		 */
> +		if (sync_page && is_writeble_pte(*shadow_pte))
> +			goto set_pte;

What if *shadow_pte points at a different page? Is that possible?

-- 
error compiling committee.c: too many arguments to function