From: Junaid Shahid <junaids@google.com>
To: kvm@vger.kernel.org
Cc: andreslc@google.com, pfeiner@google.com, pbonzini@redhat.com, guangrong.xiao@linux.intel.com
Subject: [PATCH v2 2/5] kvm: x86: mmu: Rename spte_is_locklessly_modifiable()
Date: Tue, 8 Nov 2016 15:00:27 -0800
Message-ID: <1478646030-101103-3-git-send-email-junaids@google.com>
In-Reply-To: <1478646030-101103-1-git-send-email-junaids@google.com>
References: <1478646030-101103-1-git-send-email-junaids@google.com>

This change renames spte_is_locklessly_modifiable() to
spte_can_locklessly_be_made_writable() to distinguish it from other
forms of lockless modifications. The full set of lockless modifications
is covered by spte_has_volatile_bits().

Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9c7e98..e580134 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -473,7 +473,7 @@ retry:
 }
 #endif
 
-static bool spte_is_locklessly_modifiable(u64 spte)
+static bool spte_can_locklessly_be_made_writable(u64 spte)
 {
 	return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
 		(SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
@@ -487,7 +487,7 @@ static bool spte_has_volatile_bits(u64 spte)
 	 * also, it can help us to get a stable is_writable_pte()
 	 * to ensure tlb flush is not missed.
 	 */
-	if (spte_is_locklessly_modifiable(spte))
+	if (spte_can_locklessly_be_made_writable(spte))
 		return true;
 
 	if (!shadow_accessed_mask)
@@ -556,7 +556,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	 * we always atomically update it, see the comments in
 	 * spte_has_volatile_bits().
 	 */
-	if (spte_is_locklessly_modifiable(old_spte) &&
+	if (spte_can_locklessly_be_made_writable(old_spte) &&
 	      !is_writable_pte(new_spte))
 		ret = true;
 
@@ -1212,7 +1212,7 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 	u64 spte = *sptep;
 
 	if (!is_writable_pte(spte) &&
-	    !(pt_protect && spte_is_locklessly_modifiable(spte)))
+	    !(pt_protect && spte_can_locklessly_be_made_writable(spte)))
 		return false;
 
 	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
@@ -2973,7 +2973,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 	 * Currently, to simplify the code, only the spte write-protected
 	 * by dirty-log can be fast fixed.
	 */
-	if (!spte_is_locklessly_modifiable(spte))
+	if (!spte_can_locklessly_be_made_writable(spte))
		goto exit;
 
 	/*
-- 
2.8.0.rc3.226.g39d4020
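
As a standalone illustration of the distinction the commit message
draws, below is a minimal sketch of the two predicates outside their
mmu.c context. The bit positions and the simplified
spte_has_volatile_bits() body are assumptions made only for this
example; the authoritative definitions are in arch/x86/kvm/mmu.c above.

/* Illustrative sketch, not kernel code. The mask values below are
 * placeholders; the real masks are defined in arch/x86/kvm/mmu.c. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_WRITABLE_MASK	(1ULL << 1)	/* hardware-writable bit */
#define SHADOW_ACCESSED_MASK	(1ULL << 5)	/* hardware accessed bit */
#define SPTE_HOST_WRITEABLE	(1ULL << 10)	/* host allows writes */
#define SPTE_MMU_WRITEABLE	(1ULL << 11)	/* MMU may restore writes */

/* A write-protected spte with both software-writable bits set can be
 * made writable again without taking mmu_lock, e.g. by the fast page
 * fault path; hence the new name. */
static bool spte_can_locklessly_be_made_writable(uint64_t spte)
{
	return (spte & (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE)) ==
	       (SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE);
}

/* Simplified stand-in for spte_has_volatile_bits(): lockless
 * writability is only one of the ways an spte can change out of
 * mmu_lock; hardware can also set the accessed (and dirty) bits. */
static bool spte_has_volatile_bits(uint64_t spte)
{
	if (spte_can_locklessly_be_made_writable(spte))
		return true;
	/* accessed bit still clear: hardware may set it at any time */
	return !(spte & SHADOW_ACCESSED_MASK);
}

int main(void)
{
	/* write-protected but restorable: both predicates hold */
	uint64_t a = SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE |
		     SHADOW_ACCESSED_MASK;
	/* accessed bit clear: volatile, yet not lockless-writable */
	uint64_t b = PT_WRITABLE_MASK;

	printf("a: writable=%d volatile=%d\n",
	       spte_can_locklessly_be_made_writable(a),
	       spte_has_volatile_bits(a));
	printf("b: writable=%d volatile=%d\n",
	       spte_can_locklessly_be_made_writable(b),
	       spte_has_volatile_bits(b));
	return 0;
}

Running this prints writable=1 volatile=1 for a and writable=0
volatile=1 for b: lockless writability implies volatility but not the
converse, which is what the narrower name now makes explicit.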