From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>,
kvm@vger.kernel.org, David Matlack <dmatlack@google.com>
Subject: [PATCH] KVM: x86/mmu: Replace tdp_mmu_page with a bit in the role
Date: Thu, 26 Jan 2023 13:04:01 -0800
Message-ID: <20230126210401.2862537-1-dmatlack@google.com>
Replace sp->tdp_mmu_page with a bit in the page role. This reduces the
size of struct kvm_mmu_page by a byte.
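For illustration, here is a standalone userspace sketch (the field list
is paraphrased from arch/x86/include/asm/kvm_host.h at the time; the
names here are illustrative, not part of this patch) showing why the new
bit is free: it is carved out of the union's anonymous padding, so the
role still fits in a single 32-bit word:

	#include <assert.h>
	#include <stdint.h>

	union role_sketch {
		uint32_t word;
		struct {
			unsigned level:4;
			unsigned has_4_byte_gpte:1;
			unsigned quadrant:2;
			unsigned direct:1;
			unsigned access:3;
			unsigned invalid:1;
			unsigned efer_nx:1;
			unsigned cr0_wp:1;
			unsigned smep_andnot_wp:1;
			unsigned smap_andnot_wp:1;
			unsigned ad_disabled:1;
			unsigned guest_mode:1;
			unsigned passthrough:1;
			unsigned tdp_mmu:1;	/* new bit, taken from the padding */
			unsigned :4;		/* padding shrinks from :5 to :4 */
			unsigned smm:8;
		};
	};

	/* 20 named bits + 4 bits of padding + 8 bits of smm = 32 bits. */
	static_assert(sizeof(union role_sketch) == sizeof(uint32_t),
		      "role must remain a single 32-bit word");

	int main(void) { return 0; }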
Note that tdp_mmu_init_sp() does not need to explicitly set
sp->role.tdp_mmu=1 for every SP: the bit is set once in the root's role
by kvm_calc_tdp_mmu_root_page_role(), and child SPs inherit their role
from the parent (see the sketch below).
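For reference, the child-SP initialization path at the time looks
roughly like the following (paraphrased from tdp_mmu.c; this helper is
not touched by this patch). The child copies the parent's role wholesale
and only decrements the level, so role.tdp_mmu set on the root
propagates to every child:

	static void tdp_mmu_init_child_sp(struct kvm_mmu_page *child_sp,
					  struct tdp_iter *iter)
	{
		struct kvm_mmu_page *parent_sp;
		union kvm_mmu_page_role role;

		parent_sp = sptep_to_sp(rcu_dereference(iter->sptep));

		/* Inherit everything, tdp_mmu included; only the level differs. */
		role = parent_sp->role;
		role.level--;

		tdp_mmu_init_sp(child_sp, iter->sptep, iter->gfn, role);
	}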
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/kvm/b0e8eb55-c2ee-ce13-8806-9d0184678984@redhat.com/
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/include/asm/kvm_host.h | 11 ++++++++---
 arch/x86/kvm/mmu/mmu.c          |  1 +
 arch/x86/kvm/mmu/mmu_internal.h |  1 -
 arch/x86/kvm/mmu/tdp_mmu.c      |  1 -
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 +-
 5 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4d2bc08794e4..33e5b1c56ff4 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -304,10 +304,10 @@ struct kvm_kernel_irq_routing_entry;
* the number of unique SPs that can theoretically be created is 2^n, where n
* is the number of bits that are used to compute the role.
*
- * But, even though there are 19 bits in the mask below, not all combinations
+ * But, even though there are 20 bits in the mask below, not all combinations
* of modes and flags are possible:
*
- * - invalid shadow pages are not accounted, so the bits are effectively 18
+ * - invalid shadow pages are not accounted, so the bits are effectively 19
*
* - quadrant will only be used if has_4_byte_gpte=1 (non-PAE paging);
* execonly and ad_disabled are only used for nested EPT which has
@@ -321,6 +321,10 @@ struct kvm_kernel_irq_routing_entry;
* - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
* cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
*
+ * - When tdp_mmu=1, only a subset of the remaining bits can vary across
+ *   shadow pages: level, invalid, and smm. The rest are fixed, e.g. direct=1,
+ *   access=ACC_ALL, guest_mode=0, etc.
+ *
* Therefore, the maximum number of possible upper-level shadow pages for a
* single gfn is a bit less than 2^13.
*/
@@ -340,7 +344,8 @@ union kvm_mmu_page_role {
unsigned ad_disabled:1;
unsigned guest_mode:1;
unsigned passthrough:1;
- unsigned :5;
+ unsigned tdp_mmu:1;
+ unsigned :4;
/*
* This is left at the top of the word so that
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aeb240b339f5..458a7c398f76 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5128,6 +5128,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
role.level = kvm_mmu_get_tdp_level(vcpu);
role.direct = true;
role.has_4_byte_gpte = false;
+ role.tdp_mmu = tdp_mmu_enabled;
return role;
}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index ac00bfbf32f6..49591f3bd9f9 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -54,7 +54,6 @@ struct kvm_mmu_page {
struct list_head link;
struct hlist_node hash_link;
- bool tdp_mmu_page;
bool unsync;
u8 mmu_valid_gen;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bba33aea0fb0..e0eda6332755 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -280,7 +280,6 @@ static void tdp_mmu_init_sp(struct kvm_mmu_page *sp, tdp_ptep_t sptep,
sp->role = role;
sp->gfn = gfn;
sp->ptep = sptep;
- sp->tdp_mmu_page = true;
trace_kvm_mmu_get_page(sp, true);
}
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 0a63b1afabd3..cf7321635ff1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -71,7 +71,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
u64 *spte);
#ifdef CONFIG_X86_64
-static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
+static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->role.tdp_mmu; }
#else
static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
#endif
base-commit: de60733246ff4545a0483140c1f21426b8d7cb7f
prerequisite-patch-id: 42a76ce7cec240776c21f674e99e893a3a6bee58
--
2.39.1.456.gfc5497dd1b-goog