From: Paolo Bonzini
Subject: Re: [PATCH v2] kvm: set page dirty only if page has been writable
Date: Wed, 30 Mar 2016 23:08:06 +0200
Message-ID: <56FC4036.1090501@redhat.com>
In-Reply-To: <1459370289-15994-1-git-send-email-yuzhao@google.com>
References: <1459370289-15994-1-git-send-email-yuzhao@google.com>
To: Yu Zhao, Gleb Natapov, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, kvm@vger.kernel.org

On 30/03/2016 22:38, Yu Zhao wrote:
> In the absence of a shadow dirty mask, there is no need to set the page
> dirty if the page has never been writable. This is a tiny optimization,
> but good to have for people who care a lot about dirty page tracking.
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> ---
>  arch/x86/kvm/mmu.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 70e95d0..1ff4dbb 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -557,8 +557,15 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
>  		   !is_writable_pte(new_spte))
>  		ret = true;
>
> -	if (!shadow_accessed_mask)
> +	if (!shadow_accessed_mask) {
> +		/*
> +		 * We don't set the page dirty when dropping a non-writable
> +		 * spte, so do it now if the new spte is becoming non-writable.
> +		 */
> +		if (ret)
> +			kvm_set_pfn_dirty(spte_to_pfn(old_spte));
>  		return ret;
> +	}
>
>  	/*
>  	 * Flush TLB when accessed/dirty bits are changed in the page tables,
> @@ -605,7 +612,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
>
>  	if (!shadow_accessed_mask || old_spte & shadow_accessed_mask)
>  		kvm_set_pfn_accessed(pfn);
> -	if (!shadow_dirty_mask || (old_spte & shadow_dirty_mask))
> +	if (old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
> +			PT_WRITABLE_MASK))
>  		kvm_set_pfn_dirty(pfn);
>  	return 1;
>  }
>

Looks good, thanks!

Paolo
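
For reference, a minimal standalone sketch of the fallback rule the patch
implements. The mask value, the global, and the helper name below are
simplified stand-ins for the real KVM definitions, not the kernel API itself:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real KVM definitions (illustration only). */
#define PT_WRITABLE_MASK (1ULL << 1)	/* writable bit of an x86 PTE/spte */
static uint64_t shadow_dirty_mask;	/* 0 when hardware has no dirty bit */

/*
 * Mirrors the patched test in mmu_spte_clear_track_bits(): with a hardware
 * dirty bit available, trust it; without one, fall back to "was the spte
 * writable?" instead of marking every page dirty unconditionally.
 */
static bool should_set_pfn_dirty(uint64_t old_spte)
{
	return old_spte & (shadow_dirty_mask ? shadow_dirty_mask :
			   PT_WRITABLE_MASK);
}

int main(void)
{
	shadow_dirty_mask = 0;	/* emulate hardware without dirty tracking */

	/* Read-only spte: the old code dirtied it anyway; now it is skipped. */
	printf("read-only spte -> dirty? %d\n",
	       (int)should_set_pfn_dirty(0));

	/* Writable spte: still marked dirty, same as before the patch. */
	printf("writable spte  -> dirty? %d\n",
	       (int)should_set_pfn_dirty(PT_WRITABLE_MASK));
	return 0;
}

The reasoning, per the commit message: without hardware accessed/dirty bits,
the writable bit is the only reliable proxy, and a page that was never mapped
writable through an spte cannot have been dirtied through it, so skipping
kvm_set_pfn_dirty() in that case is safe.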