From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Sean Christopherson <seanjc@google.com>,
Andrew Jones <drjones@redhat.com>,
Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
maciej.szmigiero@oracle.com,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.cs.columbia.edu>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<linux-mips@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<kvm@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)"
<kvm-riscv@lists.infradead.org>,
Peter Feiner <pfeiner@google.com>,
David Matlack <dmatlack@google.com>
Subject: [PATCH v2 04/26] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
Date: Fri, 11 Mar 2022 00:25:06 +0000 [thread overview]
Message-ID: <20220311002528.2230172-5-dmatlack@google.com> (raw)
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
Decompose kvm_mmu_get_page() into separate helper functions to increase
readability and prepare for allocating shadow pages without a vcpu
pointer.
Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
functions:
__kvm_mmu_find_shadow_page() -
Walks the page hash checking for any existing mmu pages that match the
given gfn and role. Does not attempt to synchronize the page if it is
unsync.
kvm_mmu_find_shadow_page() -
Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.
kvm_mmu_new_shadow_page() -
Allocates and initializes an entirely new kvm_mmu_page. This currently
requires a vcpu pointer for allocation and looking up the memslot but
that will be removed in a future commit.
Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
subsequent commit. The name uses "new" rather than the more typical
"alloc" to avoid clashing with the existing kvm_mmu_alloc_page().
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
arch/x86/kvm/mmu/mmu.c | 132 ++++++++++++++++++++++++---------
arch/x86/kvm/mmu/paging_tmpl.h | 5 +-
arch/x86/kvm/mmu/spte.c | 5 +-
3 files changed, 101 insertions(+), 41 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 23c2004c6435..80dbfe07c87b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
__clear_sp_write_flooding_count(sptep_to_sp(spte));
}
-static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
- union kvm_mmu_page_role role)
+/*
+ * Searches for an existing SP for the given gfn and role. Makes no attempt to
+ * sync the SP if it is marked unsync.
+ *
+ * If creating an upper-level page table, zaps unsynced pages for the same
+ * gfn and adds them to the invalid_list. It's the caller's responsibility
+ * to call kvm_mmu_commit_zap_page() on invalid_list.
+ */
+static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
+ gfn_t gfn,
+ union kvm_mmu_page_role role,
+ struct list_head *invalid_list)
{
struct hlist_head *sp_list;
struct kvm_mmu_page *sp;
int collisions = 0;
- LIST_HEAD(invalid_list);
- sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
- for_each_valid_sp(vcpu->kvm, sp, sp_list) {
+ sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
+ for_each_valid_sp(kvm, sp, sp_list) {
if (sp->gfn != gfn) {
collisions++;
continue;
@@ -2053,60 +2062,109 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
* upper-level page will be write-protected.
*/
if (role.level > PG_LEVEL_4K && sp->unsync)
- kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
- &invalid_list);
+ kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
continue;
}
- /* unsync and write-flooding only apply to indirect SPs. */
- if (sp->role.direct)
- goto trace_get_page;
+ /* Write-flooding is only tracked for indirect SPs. */
+ if (!sp->role.direct)
+ __clear_sp_write_flooding_count(sp);
- if (sp->unsync) {
- /*
- * The page is good, but is stale. kvm_sync_page does
- * get the latest guest state, but (unlike mmu_unsync_children)
- * it doesn't write-protect the page or mark it synchronized!
- * This way the validity of the mapping is ensured, but the
- * overhead of write protection is not incurred until the
- * guest invalidates the TLB mapping. This allows multiple
- * SPs for a single gfn to be unsync.
- *
- * If the sync fails, the page is zapped. If so, break
- * in order to rebuild it.
- */
- if (!kvm_sync_page(vcpu, sp, &invalid_list))
- break;
+ goto out;
+ }
- WARN_ON(!list_empty(&invalid_list));
- kvm_flush_remote_tlbs(vcpu->kvm);
- }
+ sp = NULL;
- __clear_sp_write_flooding_count(sp);
+out:
+ if (collisions > kvm->stat.max_mmu_page_hash_collisions)
+ kvm->stat.max_mmu_page_hash_collisions = collisions;
+
+ return sp;
+}
-trace_get_page:
- trace_kvm_mmu_get_page(sp, false);
+/*
+ * Looks up an existing SP for the given gfn and role if one exists. The
+ * returned SP is guaranteed to be synced.
+ */
+static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
+ gfn_t gfn,
+ union kvm_mmu_page_role role)
+{
+ struct kvm_mmu_page *sp;
+ LIST_HEAD(invalid_list);
+
+ sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
+ if (!sp)
goto out;
+
+ if (sp->unsync) {
+ /*
+ * The page is good, but is stale. kvm_sync_page does
+ * get the latest guest state, but (unlike mmu_unsync_children)
+ * it doesn't write-protect the page or mark it synchronized!
+ * This way the validity of the mapping is ensured, but the
+ * overhead of write protection is not incurred until the
+ * guest invalidates the TLB mapping. This allows multiple
+ * SPs for a single gfn to be unsync.
+ *
+ * If the sync fails, the page is zapped and added to the
+ * invalid_list.
+ */
+ if (!kvm_sync_page(vcpu, sp, &invalid_list)) {
+ sp = NULL;
+ goto out;
+ }
+
+ WARN_ON(!list_empty(&invalid_list));
+ kvm_flush_remote_tlbs(vcpu->kvm);
}
+out:
+ kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
+ return sp;
+}
+
+static struct kvm_mmu_page *kvm_mmu_new_shadow_page(struct kvm_vcpu *vcpu,
+ gfn_t gfn,
+ union kvm_mmu_page_role role)
+{
+ struct kvm_mmu_page *sp;
+ struct hlist_head *sp_list;
+
++vcpu->kvm->stat.mmu_cache_miss;
sp = kvm_mmu_alloc_page(vcpu, role.direct);
-
sp->gfn = gfn;
sp->role = role;
+
+ sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
hlist_add_head(&sp->hash_link, sp_list);
+
if (!role.direct) {
account_shadowed(vcpu->kvm, sp);
if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
}
- trace_kvm_mmu_get_page(sp, true);
-out:
- kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
- if (collisions > vcpu->kvm->stat.max_mmu_page_hash_collisions)
- vcpu->kvm->stat.max_mmu_page_hash_collisions = collisions;
+ return sp;
+}
+
+static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+ union kvm_mmu_page_role role)
+{
+ struct kvm_mmu_page *sp;
+ bool created = false;
+
+ sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
+ if (sp)
+ goto out;
+
+ created = true;
+ sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
+
+out:
+ trace_kvm_mmu_get_page(sp, created);
return sp;
}
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c3909a07e938..55cac59b9c9b 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -692,8 +692,9 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
* the gpte is changed from non-present to present.
* Otherwise, the guest may use the wrong mapping.
*
- * For PG_LEVEL_4K, kvm_mmu_get_page() has already
- * synchronized it transiently via kvm_sync_page().
+ * For PG_LEVEL_4K, kvm_mmu_find_shadow_page() has
+ * already synchronized it transiently via
+ * kvm_sync_page().
*
* For higher level pagetable, we synchronize it via
* the slower mmu_sync_children(). If it needs to
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..d10189d9c877 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,8 +150,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
/*
* Optimization: for pte sync, if spte was writable the hash
* lookup is unnecessary (and expensive). Write protection
- * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
- * Same reasoning can be applied to dirty page accounting.
+ * is the responsibility of kvm_mmu_new_shadow_page() and
+ * kvm_mmu_sync_roots(). Same reasoning can be applied to dirty
+ * page accounting.
*/
if (is_writable_pte(old_spte))
goto out;
--
2.35.1.723.g4982287a31-goog