From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Sean Christopherson <seanjc@google.com>,
Andrew Jones <drjones@redhat.com>,
Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
maciej.szmigiero@oracle.com,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.cs.columbia.edu>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<linux-mips@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<kvm@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)"
<kvm-riscv@lists.infradead.org>,
Peter Feiner <pfeiner@google.com>,
Lai Jiangshan <jiangshanlai@gmail.com>,
David Matlack <dmatlack@google.com>
Subject: [PATCH v6 10/22] KVM: x86/mmu: Pass memory caches to allocate SPs separately
Date: Mon, 16 May 2022 23:21:26 +0000
Message-ID: <20220516232138.1783324-11-dmatlack@google.com>
In-Reply-To: <20220516232138.1783324-1-dmatlack@google.com>

Refactor kvm_mmu_alloc_shadow_page() to receive as a parameter the caches
from which it will allocate the various pieces of memory for shadow pages,
rather than deriving them from the vcpu pointer. This will be useful in a
future commit where shadow pages are allocated during VM ioctls for eager
page splitting, and thus will use a different set of caches.

Preemptively pull the caches out all the way to kvm_mmu_get_shadow_page()
since eager page splitting will not be calling kvm_mmu_alloc_shadow_page()
directly.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
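Not part of the patch, just context for reviewers: a rough sketch of how a
later eager page splitting path might feed its own caches through the new
parameter. The split_* cache fields and the helper name below are
illustrative assumptions, not code from this series; later patches in the
series also relax the vcpu requirement so a VM ioctl path can allocate
without one.

/*
 * Hypothetical caller: allocate a shadow page for eager page splitting
 * from dedicated VM-scoped caches, topped up during the VM ioctl, rather
 * than from the vCPU's per-vCPU caches.
 */
static struct kvm_mmu_page *split_get_shadow_page(struct kvm_vcpu *vcpu,
						  gfn_t gfn,
						  union kvm_mmu_page_role role)
{
	struct kvm *kvm = vcpu->kvm;
	struct shadow_page_caches caches = {
		.page_header_cache = &kvm->arch.split_page_header_cache,
		.shadow_page_cache = &kvm->arch.split_shadow_page_cache,
		.gfn_array_cache = &kvm->arch.split_gfn_array_cache,
	};

	return __kvm_mmu_get_shadow_page(vcpu, &caches, gfn, role);
}

Either way, the vCPU fault path and the split path end up sharing the same
allocation code and differ only in where the memory comes from.
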
arch/x86/kvm/mmu/mmu.c | 36 +++++++++++++++++++++++++++++-------
1 file changed, 29 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6a3b1b00f02b..bad4dd5aa051 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2075,17 +2075,25 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
+/* Caches used when allocating a new shadow page. */
+struct shadow_page_caches {
+	struct kvm_mmu_memory_cache *page_header_cache;
+	struct kvm_mmu_memory_cache *shadow_page_cache;
+	struct kvm_mmu_memory_cache *gfn_array_cache;
+};
+
 static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
+						      struct shadow_page_caches *caches,
 						      gfn_t gfn,
 						      struct hlist_head *sp_list,
 						      union kvm_mmu_page_role role)
 {
 	struct kvm_mmu_page *sp;
 
-	sp = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
-	sp->spt = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
+	sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
+	sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
 	if (!role.direct)
-		sp->gfns = kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_gfn_array_cache);
+		sp->gfns = kvm_mmu_memory_cache_alloc(caches->gfn_array_cache);
 
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 
@@ -2107,9 +2115,10 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm_vcpu *vcpu,
 	return sp;
 }
 
-static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
-						    gfn_t gfn,
-						    union kvm_mmu_page_role role)
+static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						      struct shadow_page_caches *caches,
+						      gfn_t gfn,
+						      union kvm_mmu_page_role role)
 {
 	struct hlist_head *sp_list;
 	struct kvm_mmu_page *sp;
@@ -2120,13 +2129,26 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	sp = kvm_mmu_find_shadow_page(vcpu, gfn, sp_list, role);
 	if (!sp) {
 		created = true;
-		sp = kvm_mmu_alloc_shadow_page(vcpu, gfn, sp_list, role);
+		sp = kvm_mmu_alloc_shadow_page(vcpu, caches, gfn, sp_list, role);
 	}
 
 	trace_kvm_mmu_get_page(sp, created);
 	return sp;
 }
 
+static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
+						    gfn_t gfn,
+						    union kvm_mmu_page_role role)
+{
+	struct shadow_page_caches caches = {
+		.page_header_cache = &vcpu->arch.mmu_page_header_cache,
+		.shadow_page_cache = &vcpu->arch.mmu_shadow_page_cache,
+		.gfn_array_cache = &vcpu->arch.mmu_gfn_array_cache,
+	};
+
+	return __kvm_mmu_get_shadow_page(vcpu, &caches, gfn, role);
+}
+
 static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, u32 access)
 {
 	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
--
2.36.0.550.gb090851708-goog