From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH 1/3] KVM: MMU: Clean up set_spte()'s ACC_WRITE_MASK handling
Date: Thu, 09 May 2013 20:16:18 +0800
Message-ID: <518B9392.7070405@linux.vnet.ibm.com>
References: <20130509154350.15b956c4.yoshikawa_takuya_b1@lab.ntt.co.jp> <20130509154433.d8b62a0f.yoshikawa_takuya_b1@lab.ntt.co.jp> <518B7797.6040509@linux.vnet.ibm.com> <20130509111848.GF32023@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
To: Gleb Natapov
Cc: Takuya Yoshikawa, pbonzini@redhat.com, mtosatti@redhat.com, kvm@vger.kernel.org
In-Reply-To: <20130509111848.GF32023@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/09/2013 07:18 PM, Gleb Natapov wrote:
> On Thu, May 09, 2013 at 06:16:55PM +0800, Xiao Guangrong wrote:
>> On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
>>> Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
>>> "if (mmu_need_write_protect())" block so that mark_page_dirty() is not
>>> called in the following if statement, simply moving the call into the
>>> appropriate else block is better.
>>>
>>> Signed-off-by: Takuya Yoshikawa
>>> ---
>>>  arch/x86/kvm/mmu.c | 7 ++-----
>>>  1 files changed, 2 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 004cc87..08119a8 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -2386,14 +2386,11 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>>>  			pgprintk("%s: found shadow page for %llx, marking ro\n",
>>>  				 __func__, gfn);
>>>  			ret = 1;
>>> -			pte_access &= ~ACC_WRITE_MASK;
>>>  			spte &= ~(PT_WRITABLE_MASK | SPTE_MMU_WRITEABLE);
>>> -		}
>>> +		} else
>>> +			mark_page_dirty(vcpu->kvm, gfn);
>>>  	}
>>>
>>> -	if (pte_access & ACC_WRITE_MASK)
>>> -		mark_page_dirty(vcpu->kvm, gfn);
>>> -
>>>  set_pte:
>>>  	if (mmu_spte_update(sptep, spte))
>>>  		kvm_flush_remote_tlbs(vcpu->kvm);
>>
>> That function is really magic, and this change does not really help it.
>> I posted several patches some months ago to make this kind of code easier
>> to understand, but I am too tired to update them.
> Can you point me to them? Your work is really appreciated, I am sorry

There are two patches with these set_spte cleanups:
https://lkml.org/lkml/2013/1/23/125
https://lkml.org/lkml/2013/1/23/138

> you feel this way. It is not your fault, it is mine.

I will update these patches when I finish the zap-all-page and
zap-mmio-sp related work.