From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: [PATCH v3 3/4] KVM: MMU: Atomically check for accessed bit when
	dropping an spte
Date: Mon, 5 Jul 2010 15:39:12 +0300
Message-ID: <1278333553-22731-4-git-send-email-avi@redhat.com>
References: <1278333553-22731-1-git-send-email-avi@redhat.com>
Cc: Marcelo Tosatti
To: kvm@vger.kernel.org
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:4568 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1758649Ab0GEMjP
	(ORCPT ); Mon, 5 Jul 2010 08:39:15 -0400
Received: from int-mx08.intmail.prod.int.phx2.redhat.com
	(int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.21]) by mx1.redhat.com
	(8.13.8/8.13.8) with ESMTP id o65CdE8B019150
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for ; Mon, 5 Jul 2010 08:39:15 -0400
Received: from cleopatra.tlv.redhat.com (cleopatra.tlv.redhat.com
	[10.35.255.11]) by int-mx08.intmail.prod.int.phx2.redhat.com
	(8.13.8/8.13.8) with ESMTP id o65CdEg7026883 for ;
	Mon, 5 Jul 2010 08:39:14 -0400
In-Reply-To: <1278333553-22731-1-git-send-email-avi@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Currently, in the window between the check for the accessed bit and
actually dropping the spte, a vcpu can access the page through the spte
and set the bit, which the mmu will then ignore.

Fix by using an exchange operation to atomically fetch the spte and
drop it.

Signed-off-by: Avi Kivity
---
 arch/x86/kvm/mmu.c |   28 +++++++++++++++++++++-------
 1 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e1e6967..b744fbc 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -288,6 +288,21 @@ static void __set_spte(u64 *sptep, u64 spte)
 #endif
 }
 
+static u64 __xchg_spte(u64 *sptep, u64 new_spte)
+{
+#ifdef CONFIG_X86_64
+	return xchg(sptep, new_spte);
+#else
+	u64 old_spte;
+
+	do {
+		old_spte = *sptep;
+	} while (cmpxchg64(sptep, old_spte, new_spte) != old_spte);
+
+	return old_spte;
+#endif
+}
+
 static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
 				  struct kmem_cache *base_cache, int min)
 {
@@ -653,18 +668,17 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 static void drop_spte(struct kvm *kvm, u64 *sptep, u64 new_spte)
 {
 	pfn_t pfn;
+	u64 old_spte;
 
-	if (!is_rmap_spte(*sptep)) {
-		__set_spte(sptep, new_spte);
+	old_spte = __xchg_spte(sptep, new_spte);
+	if (!is_rmap_spte(old_spte))
 		return;
-	}
-	pfn = spte_to_pfn(*sptep);
-	if (*sptep & shadow_accessed_mask)
+	pfn = spte_to_pfn(old_spte);
+	if (old_spte & shadow_accessed_mask)
 		kvm_set_pfn_accessed(pfn);
-	if (is_writable_pte(*sptep))
+	if (is_writable_pte(old_spte))
 		kvm_set_pfn_dirty(pfn);
 	rmap_remove(kvm, sptep);
-	__set_spte(sptep, new_spte);
 }
 
 static u64 *rmap_next(struct kvm *kvm, unsigned long *rmapp, u64 *spte)
-- 
1.7.1
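
As a standalone illustration of the race and the fix, here is a minimal
userspace sketch using C11 atomics in place of the kernel's xchg() and
cmpxchg64(). Every name in it (fake_spte, ACCESSED_BIT,
xchg_spte_emulated, drop_racy, drop_atomic) is an invented analogue for
demonstration only, not kernel code:

	/*
	 * Userspace sketch of the race closed by this patch.  A C11
	 * atomic 64-bit word stands in for the spte; bit 5 stands in
	 * for the Accessed bit, as in the x86 PTE layout.
	 */
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	#define ACCESSED_BIT (1ULL << 5)

	static _Atomic uint64_t fake_spte = 0x1000;	/* present, not accessed */

	/*
	 * Analogue of the patch's 32-bit fallback: emulate a 64-bit
	 * exchange with a compare-and-swap loop, as __xchg_spte() does
	 * with cmpxchg64() when CONFIG_X86_64 is not set.
	 */
	static uint64_t xchg_spte_emulated(_Atomic uint64_t *sptep, uint64_t new_spte)
	{
		uint64_t old_spte = atomic_load(sptep);

		/* Retry until no other thread modified the spte in between. */
		while (!atomic_compare_exchange_weak(sptep, &old_spte, new_spte))
			;
		return old_spte;
	}

	/* Pre-patch pattern: test the live spte, then overwrite it. */
	static void drop_racy(uint64_t new_spte)
	{
		if (atomic_load(&fake_spte) & ACCESSED_BIT)
			puts("racy: accessed bit seen");
		/*
		 * Window: a vcpu (or the hardware page walker) can set
		 * the Accessed bit right here; the store below silently
		 * throws it away.
		 */
		atomic_store(&fake_spte, new_spte);
	}

	/* Post-patch pattern: fetch and replace atomically, then test. */
	static void drop_atomic(uint64_t new_spte)
	{
		uint64_t old_spte = xchg_spte_emulated(&fake_spte, new_spte);

		if (old_spte & ACCESSED_BIT)
			puts("atomic: accessed bit seen");	/* cannot be lost */
	}

	int main(void)
	{
		drop_racy(0);				/* may miss a concurrent set */
		atomic_store(&fake_spte, 0x1000 | ACCESSED_BIT);
		drop_atomic(0);				/* always observes the bit */
		return 0;
	}

The compare-and-swap loop mirrors the design of the patch's 32-bit
path: without a native 64-bit xchg, the old value is re-read and the
exchange retried until no concurrent writer slipped in between, so the
returned old_spte always reflects exactly what was replaced.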