From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Peter Xu <peterx@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Peter Feiner <pfeiner@google.com>,
Andrew Jones <drjones@redhat.com>,
maciej.szmigiero@oracle.com, kvm@vger.kernel.org,
David Matlack <dmatlack@google.com>
Subject: [PATCH 05/23] KVM: x86/mmu: Pass memslot to kvm_mmu_create_sp()
Date: Thu, 3 Feb 2022 01:00:33 +0000
Message-ID: <20220203010051.2813563-6-dmatlack@google.com>
In-Reply-To: <20220203010051.2813563-1-dmatlack@google.com>

Passing the memslot to kvm_mmu_create_sp() avoids the need for the vCPU
pointer when write-protecting indirect 4k shadow pages. This moves us
closer to being able to create new shadow pages during VM ioctls for
eager page splitting, where there is no vCPU pointer.
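
(Illustrative sketch only, not code from this series; the function
name below is hypothetical. It shows the constraint: a path reached
from a VM ioctl has only the VM and the memslot in scope, so any
shadow-page helper it calls must take the slot explicitly.)

  /* Hypothetical VM-ioctl path: no vCPU is available here. */
  static void example_eager_split(struct kvm *kvm,
                                  const struct kvm_memory_slot *slot)
  {
          /*
           * Shadow-page creation reached from this context must take
           * the memslot directly rather than deriving it from a vCPU,
           * which is why kvm_mmu_create_sp() now accepts a slot.
           */
  }
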
This change does not negatively impact "Populate memory time" for ept=Y
or ept=N configurations since kvm_vcpu_gfn_to_memslot() caches the last
used slot. So even though we now look up the slot more often, it is a
very cheap check.
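
(To illustrate why the repeated lookup is cheap, here is a rough
sketch of the last-used-slot caching pattern. Field and function
names are hedged/illustrative; the real kvm_vcpu_gfn_to_memslot()
implementation differs in detail.)

  /* Simplified sketch of last-used-slot caching, not kernel code. */
  struct kvm_memory_slot *example_gfn_to_memslot(struct kvm_vcpu *vcpu,
                                                 gfn_t gfn)
  {
          struct kvm_memory_slot *slot = vcpu->last_used_slot;

          /* Fast path: back-to-back lookups of the same slot are O(1). */
          if (slot && gfn >= slot->base_gfn &&
              gfn < slot->base_gfn + slot->npages)
                  return slot;

          /* Slow path: full memslot search, then refresh the cache. */
          slot = __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn);
          vcpu->last_used_slot = slot;
          return slot;
  }

In the common case the lookup added to kvm_mmu_get_sp() below hits
the fast path.
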
Opportunistically move the code to write-protect GFNs shadowed by
PG_LEVEL_4K shadow pages into account_shadowed() to reduce indentation
and consolidate the code. This also eliminates a memslot lookup.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
arch/x86/kvm/mmu/mmu.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f55af9c66db..49f82addf4b5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -804,16 +804,14 @@ void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn)
update_gfn_disallow_lpage_count(slot, gfn, -1);
}
-static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
+static void account_shadowed(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
+ struct kvm_mmu_page *sp)
{
- struct kvm_memslots *slots;
- struct kvm_memory_slot *slot;
gfn_t gfn;
kvm->arch.indirect_shadow_pages++;
gfn = sp->gfn;
- slots = kvm_memslots_for_spte_role(kvm, sp->role);
- slot = __gfn_to_memslot(slots, gfn);
/* the non-leaf shadow pages are keeping readonly. */
if (sp->role.level > PG_LEVEL_4K)
@@ -821,6 +819,9 @@ static void account_shadowed(struct kvm *kvm, struct kvm_mmu_page *sp)
KVM_PAGE_TRACK_WRITE);
kvm_mmu_gfn_disallow_lpage(slot, gfn);
+
+ if (kvm_mmu_slot_gfn_write_protect(kvm, slot, gfn, PG_LEVEL_4K))
+ kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
}
void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp)
@@ -2144,6 +2145,7 @@ static struct kvm_mmu_page *kvm_mmu_get_existing_sp(struct kvm_vcpu *vcpu,
}
static struct kvm_mmu_page *kvm_mmu_create_sp(struct kvm_vcpu *vcpu,
+ struct kvm_memory_slot *slot,
gfn_t gfn,
union kvm_mmu_page_role role)
{
@@ -2159,11 +2161,8 @@ static struct kvm_mmu_page *kvm_mmu_create_sp(struct kvm_vcpu *vcpu,
sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
hlist_add_head(&sp->hash_link, sp_list);
- if (!role.direct) {
- account_shadowed(vcpu->kvm, sp);
- if (role.level == PG_LEVEL_4K && kvm_vcpu_write_protect_gfn(vcpu, gfn))
- kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
- }
+ if (!role.direct)
+ account_shadowed(vcpu->kvm, slot, sp);
return sp;
}
@@ -2171,6 +2170,7 @@ static struct kvm_mmu_page *kvm_mmu_create_sp(struct kvm_vcpu *vcpu,
static struct kvm_mmu_page *kvm_mmu_get_sp(struct kvm_vcpu *vcpu, gfn_t gfn,
union kvm_mmu_page_role role)
{
+ struct kvm_memory_slot *slot;
struct kvm_mmu_page *sp;
bool created = false;
@@ -2179,7 +2179,8 @@ static struct kvm_mmu_page *kvm_mmu_get_sp(struct kvm_vcpu *vcpu, gfn_t gfn,
goto out;
created = true;
- sp = kvm_mmu_create_sp(vcpu, gfn, role);
+ slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
+ sp = kvm_mmu_create_sp(vcpu, slot, gfn, role);
out:
trace_kvm_mmu_get_page(sp, created);
--
2.35.0.rc2.247.g8bbb082509-goog