From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
Date: Thu, 09 May 2013 18:16:55 +0800
Message-ID: <518B7797.6040509@linux.vnet.ibm.com>
References: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp>
 <20130509154433.d8b62a0f.yoshikawa_takuya_b1@lab.ntt.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: gleb@redhat.com, pbonzini@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org
To: Takuya Yoshikawa
In-Reply-To: <20130509154433.d8b62a0f.yoshikawa_takuya_b1@lab.ntt.co.jp>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
> Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
> "if (mmu_need_write_protect())" block so as not to call mark_page_dirty()
> in the following if statement, it is better to simply move the call into
> the appropriate else block.
>
> Signed-off-by: Takuya Yoshikawa
> ---
>  arch/x86/kvm/mmu.c |    7 ++-----
>  1 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 004cc87..08119a8 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> 			pgprintk("%s: found shadow page for %llx, marking ro\n",
> 				 __func__, gfn);
> 			ret = 1;
> -			pte_access &= ~ACC_WRITE_MASK;
> 			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
> -		}
> +		} else
> +			mark_page_dirty(vcpu->kvm, gfn);
> 	}
>
> -	if (pte_access & ACC_WRITE_MASK)
> -		mark_page_dirty(vcpu->kvm, gfn);
> -
> set_pte:
> 	if (mmu_spte_update(sptep, spte))
> 		kvm_flush_remote_tlbs(vcpu->kvm);

That function is really magic, and this change does not really help it.
I had several patches posted some months ago to make this kind of code
easier to understand, but I am too tired to update them.