From: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
To: unlisted-recipients:; (no To-header on input)
Cc: Avi Kivity <avi@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
KVM list <kvm@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH v4 5/9] KVM MMU: rename 'root_count' to 'active_count'
Date: Thu, 06 May 2010 17:31:11 +0800
Message-ID: <4BE28C5F.7020104@cn.fujitsu.com>
In-Reply-To: <4BE2818A.5000301@cn.fujitsu.com>
Rename 'root_count' to 'active_count' in kvm_mmu_page, since unsync pages
will also use it in a later patch.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
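Note for reviewers (not part of the commit): the rename reflects that the
counter now means "this page has live users and must not be freed yet",
rather than "this page is an active root". A minimal, self-contained sketch
of that pattern follows; the struct and function names below are
illustrative only, not the kernel's:

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for kvm_mmu_page: freed only when the last user drops it. */
struct demo_page {
	int active_count;	/* live users: active roots, later also unsync pages */
	int invalid;		/* marked for destruction once unreferenced */
};

static void demo_put(struct demo_page *p)
{
	if (--p->active_count == 0 && p->invalid) {
		printf("last user gone: freeing page\n");
		free(p);
	}
}

int main(void)
{
	struct demo_page *p = calloc(1, sizeof(*p));

	p->active_count = 2;	/* e.g. one active root plus one unsync user */
	p->invalid = 1;
	demo_put(p);		/* still referenced, kept alive */
	demo_put(p);		/* count reaches zero, page is freed */
	return 0;
}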
arch/x86/include/asm/kvm_host.h | 7 ++++++-
arch/x86/kvm/mmu.c | 14 +++++++-------
arch/x86/kvm/mmutrace.h | 6 +++---
3 files changed, 16 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ed48904..86a8550 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -203,7 +203,12 @@ struct kvm_mmu_page {
DECLARE_BITMAP(slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
bool multimapped; /* More than one parent_pte? */
bool unsync;
- int root_count; /* Currently serving as active root */
+ /*
+ * If active_count > 0, the page is not freed immediately; it is still
+ * in use as an active root or by unsync pages that are currently
+ * outside kvm->mmu_lock's protection.
+ */
+ int active_count;
unsigned int unsync_children;
union {
u64 *parent_pte; /* !multimapped */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 26edc11..58cf0f1 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1539,7 +1539,7 @@ static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
unaccount_shadowed(kvm, sp->gfn);
if (sp->unsync)
kvm_unlink_unsync_page(kvm, sp);
- if (!sp->root_count) {
+ if (!sp->active_count) {
hlist_del(&sp->hash_link);
kvm_mmu_free_page(kvm, sp);
} else {
@@ -2060,8 +2060,8 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu)
hpa_t root = vcpu->arch.mmu.root_hpa;
sp = page_header(root);
- --sp->root_count;
- if (!sp->root_count && sp->role.invalid)
+ --sp->active_count;
+ if (!sp->active_count && sp->role.invalid)
kvm_mmu_zap_page(vcpu->kvm, sp);
vcpu->arch.mmu.root_hpa = INVALID_PAGE;
spin_unlock(&vcpu->kvm->mmu_lock);
@@ -2073,8 +2073,8 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu)
if (root) {
root &= PT64_BASE_ADDR_MASK;
sp = page_header(root);
- --sp->root_count;
- if (!sp->root_count && sp->role.invalid)
+ --sp->active_count;
+ if (!sp->active_count && sp->role.invalid)
kvm_mmu_zap_page(vcpu->kvm, sp);
}
vcpu->arch.mmu.pae_root[i] = INVALID_PAGE;
@@ -2120,7 +2120,7 @@ static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
PT64_ROOT_LEVEL, direct,
ACC_ALL, NULL);
root = __pa(sp->spt);
- ++sp->root_count;
+ ++sp->active_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.root_hpa = root;
return 0;
@@ -2150,7 +2150,7 @@ static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
PT32_ROOT_LEVEL, direct,
ACC_ALL, NULL);
root = __pa(sp->spt);
- ++sp->root_count;
+ ++sp->active_count;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu.pae_root[i] = root | PT_PRESENT_MASK;
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index 42f07b1..8c8d265 100644
--- a/arch/x86/kvm/mmutrace.h
+++ b/arch/x86/kvm/mmutrace.h
@@ -10,13 +10,13 @@
#define KVM_MMU_PAGE_FIELDS \
__field(__u64, gfn) \
__field(__u32, role) \
- __field(__u32, root_count) \
+ __field(__u32, active_count) \
__field(bool, unsync)
#define KVM_MMU_PAGE_ASSIGN(sp) \
__entry->gfn = sp->gfn; \
__entry->role = sp->role.word; \
- __entry->root_count = sp->root_count; \
+ __entry->active_count = sp->active_count; \
__entry->unsync = sp->unsync;
#define KVM_MMU_PAGE_PRINTK() ({ \
@@ -37,7 +37,7 @@
access_str[role.access], \
role.invalid ? " invalid" : "", \
role.nxe ? "" : "!", \
- __entry->root_count, \
+ __entry->active_count, \
__entry->unsync ? "unsync" : "sync", 0); \
ret; \
})
--
1.6.1.2
Thread overview: 11+ messages
[not found] <4BE2818A.5000301@cn.fujitsu.com>
2010-05-06 9:30 ` [PATCH v4 1/9] KVM MMU: split kvm_sync_page() function Xiao Guangrong
2010-05-06 9:30 ` [PATCH v4 2/9] KVM MMU: don't write-protect if have new mapping to unsync page Xiao Guangrong
2010-05-06 9:30 ` [PATCH v4 3/9] KVM MMU: allow more page become unsync at gfn mapping time Xiao Guangrong
2010-05-06 9:30 ` [PATCH v4 4/9] KVM MMU: allow more page become unsync at getting sp time Xiao Guangrong
2010-05-06 9:31 ` Xiao Guangrong [this message]
2010-05-07 3:57 ` [PATCH v5 5/9] KVM MMU: rename 'root_count' to 'active_count' Xiao Guangrong
2010-05-06 9:31 ` [PATCH v4 6/9] KVM MMU: support keeping sp live while it's out of protection Xiao Guangrong
2010-05-07 3:58 ` [PATCH v5 " Xiao Guangrong
2010-05-06 9:31 ` [PATCH v4 7/9] KVM MMU: separate invlpg code form kvm_mmu_pte_write() Xiao Guangrong
2010-05-06 9:31 ` [PATCH v4 8/9] KVM MMU: no need atomic operation for 'invlpg_counter' Xiao Guangrong
2010-05-06 9:31 ` [PATCH v4 9/9] KVM MMU: optimize sync/update unsync-page Xiao Guangrong