From: Avi Kivity <avi@redhat.com>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
kvm@vger.kernel.org
Subject: [PATCH v2 6/8] KVM: MMU: Simplify spte fetch() function
Date: Mon, 12 Jul 2010 14:30:52 +0300
Message-ID: <1278934254-5598-7-git-send-email-avi@redhat.com>
In-Reply-To: <1278934254-5598-1-git-send-email-avi@redhat.com>
Partition the function into three sections:

 - fetching indirect shadow pages (host_level > guest_level)
 - fetching direct shadow pages (page_level < host_level <= guest_level)
 - the final spte (page_level == host_level)

Instead of the current spaghetti.

A slight change from the original code is that we call validate_direct_spte()
more often: previously we called it only for gw->level, now we also call it
for lower levels.  The change should have no effect.
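
For review, it may help to see the new shape of the function in isolation.
Below is a minimal, self-contained C model of the restructured control flow
only: struct walk, walk_okay() and walk_next() are simplified stand-ins for
the kernel's shadow-walk iterator, guest_level and page_level stand in for
gw->level and hlevel, and the printf() calls mark where the real code fetches
shadow pages and installs the spte.  It is a sketch of the three sections,
not KVM code, and it also shows why validate_direct_spte() now runs at every
direct level rather than only at gw->level:

    #include <stdio.h>

    struct walk { int level; };                 /* models the shadow-walk iterator */

    static int walk_okay(const struct walk *it) { return it->level >= 1; }
    static void walk_next(struct walk *it)      { it->level--; }

    int main(void)
    {
        struct walk it = { .level = 4 };        /* top of the shadow walk          */
        int guest_level = 3;                    /* stands in for gw->level         */
        int page_level = 2;                     /* stands in for hlevel            */

        /* Section 1: indirect shadow pages, above the guest page level. */
        for (; walk_okay(&it) && it.level > guest_level; walk_next(&it))
            printf("level %d: fetch indirect shadow page, validate gpte\n",
                   it.level);

        /* Section 2: direct shadow pages, down to the host page level.
         * validate_direct_spte() is now called at each of these levels. */
        for (; walk_okay(&it) && it.level > page_level; walk_next(&it))
            printf("level %d: fetch direct shadow page\n", it.level);

        /* Section 3: install the final spte at the host page level. */
        printf("level %d: set final spte\n", it.level);
        return 0;
    }

With the levels above, the model visits level 4 as an indirect page, level 3
as a direct page, and installs the final spte at level 2, matching the three
sections listed in the changelog.
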
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 88 +++++++++++++++++++++++---------------------
1 files changed, 46 insertions(+), 42 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1ef4a6a..441f51c 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -327,9 +327,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
unsigned access = gw->pt_access;
struct kvm_mmu_page *sp;
u64 *sptep = NULL;
- int direct;
- gfn_t table_gfn;
- int level;
+ int uninitialized_var(level);
bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
unsigned direct_access;
struct kvm_shadow_walk_iterator iterator;
@@ -341,59 +339,65 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!dirty)
direct_access &= ~ACC_WRITE_MASK;
- for_each_shadow_entry(vcpu, addr, iterator) {
+ for (shadow_walk_init(&iterator, vcpu, addr);
+ shadow_walk_okay(&iterator) && iterator.level > gw->level;
+ shadow_walk_next(&iterator)) {
+ gfn_t table_gfn;
+
level = iterator.level;
sptep = iterator.sptep;
- if (iterator.level == hlevel) {
- mmu_set_spte(vcpu, sptep, access,
- gw->pte_access & access,
- user_fault, write_fault,
- dirty, ptwrite, level,
- gw->gfn, pfn, false, true);
- break;
+
+ drop_large_spte(vcpu, sptep);
+
+ if (is_shadow_present_pte(*sptep))
+ continue;
+
+ table_gfn = gw->table_gfn[level - 2];
+ sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
+ false, access, sptep);
+
+ /*
+ * Verify that the gpte in the page we've just write
+ * protected is still there.
+ */
+ if (!FNAME(validate_indirect_spte)(vcpu, sptep, sp,
+ gw, level - 1)) {
+ kvm_release_pfn_clean(pfn);
+ return NULL;
}
- if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)
- && level == gw->level)
- validate_direct_spte(vcpu, sptep, direct_access);
+ link_shadow_page(sptep, sp);
+ }
+
+ for (;
+ shadow_walk_okay(&iterator) && iterator.level > hlevel;
+ shadow_walk_next(&iterator)) {
+ gfn_t direct_gfn;
+
+ level = iterator.level;
+ sptep = iterator.sptep;
drop_large_spte(vcpu, sptep);
if (is_shadow_present_pte(*sptep))
continue;
- if (level <= gw->level) {
- direct = 1;
- access = direct_access;
-
- /*
- * It is a large guest pages backed by small host pages,
- * So we set @direct(@sp->role.direct)=1, and set
- * @table_gfn(@sp->gfn)=the base page frame for linear
- * translations.
- */
- table_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
- } else {
- direct = 0;
- table_gfn = gw->table_gfn[level - 2];
- }
- sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
- direct, access, sptep);
- if (!direct)
- /*
- * Verify that the gpte in the page we've just write
- * protected is still there.
- */
- if (!FNAME(validate_indirect_spte)(vcpu, sptep, sp,
- gw, level - 1)) {
- kvm_release_pfn_clean(pfn);
- sptep = NULL;
- break;
- }
+ validate_direct_spte(vcpu, sptep, direct_access);
+ direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+
+ sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, level-1,
+ true, direct_access, sptep);
link_shadow_page(sptep, sp);
}
+ sptep = iterator.sptep;
+ level = iterator.level;
+
+ mmu_set_spte(vcpu, sptep, access, gw->pte_access & access,
+ user_fault, write_fault, dirty, ptwrite, level,
+ gw->gfn, pfn, false, true);
+
return sptep;
}
--
1.7.1