From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Sean Christopherson <seanjc@google.com>,
Andrew Jones <drjones@redhat.com>,
Ben Gardon <bgardon@google.com>, Peter Xu <peterx@redhat.com>,
maciej.szmigiero@oracle.com,
"moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)"
<kvmarm@lists.cs.columbia.edu>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<linux-mips@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)"
<kvm@vger.kernel.org>,
"open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)"
<kvm-riscv@lists.infradead.org>,
Peter Feiner <pfeiner@google.com>,
David Matlack <dmatlack@google.com>
Subject: [PATCH v2 24/26] KVM: x86/mmu: Split huge pages aliased by multiple SPTEs
Date: Fri, 11 Mar 2022 00:25:26 +0000 [thread overview]
Message-ID: <20220311002528.2230172-25-dmatlack@google.com> (raw)
In-Reply-To: <20220311002528.2230172-1-dmatlack@google.com>
The existing huge page splitting code bails if it encounters a huge page
that is aliased by another SPTE that has already been split (either due
to NX huge pages or eager page splitting). Extend the huge page
splitting code to also handle such aliases.
The tricky part is handling what is already in the lower level page
table. If eager page splitting were the only operation that split huge
pages, this would be fine. However, huge pages can also be split by NX
huge pages. This means the lower level page table may be only partially
filled in and may point to even lower level page tables that are
themselves partially filled in. We can fill in the rest of the page
table, but dealing with the lower level page tables would be too
complex.
To handle this, we flush TLBs after dropping the huge SPTE whenever we
are about to install a lower level page table that was partially filled
in (*). We can skip the TLB flush if the lower level page table was
empty (no aliasing) or identical to what we were already going to
populate it with (aliased huge page that was just eagerly split).
(*) This TLB flush could probably be delayed until we're about to drop
the MMU lock, which would also let us batch flushes for multiple splits.
However, such scenarios should be rare in practice (a huge page must be
aliased in multiple SPTEs and have been split for NX huge pages in only
some of them). Flushing immediately is simpler to plumb and also reduces
the chances of tripping over a CPU bug (e.g. see iTLB multi-hit).
Signed-off-by: David Matlack <dmatlack@google.com>
---
arch/x86/include/asm/kvm_host.h | 5 ++-
arch/x86/kvm/mmu/mmu.c | 73 +++++++++++++++------------------
2 files changed, 36 insertions(+), 42 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 00a5c0bcc2eb..275d00528805 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1245,9 +1245,10 @@ struct kvm_arch {
* Memory cache used to allocate pte_list_desc structs while splitting
* huge pages. In the worst case, to split one huge page we need 512
* pte_list_desc structs to add each new lower level leaf sptep to the
- * memslot rmap.
+ * memslot rmap plus 1 to extend the parent_ptes rmap of the new lower
+ * level page table.
*/
-#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 512
+#define HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY 513
__DEFINE_KVM_MMU_MEMORY_CACHE(huge_page_split_desc_cache,
HUGE_PAGE_SPLIT_DESC_CACHE_CAPACITY);
};
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 95b8e2ef562f..68785b422a08 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6208,6 +6208,7 @@ static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
{
struct kvm_mmu_page *split_sp;
union kvm_mmu_page_role role;
+ bool created = false;
unsigned int access;
gfn_t gfn;
@@ -6220,25 +6221,21 @@ static struct kvm_mmu_page *kvm_mmu_get_sp_for_split(struct kvm *kvm,
*/
role = kvm_mmu_child_role(huge_sptep, true, access);
split_sp = kvm_mmu_find_direct_sp(kvm, gfn, role);
-
- /*
- * Opt not to split if the lower-level SP already exists. This requires
- * more complex handling as the SP may be already partially filled in
- * and may need extra pte_list_desc structs to update parent_ptes.
- */
if (split_sp)
- return NULL;
+ goto out;
+ created = true;
swap(split_sp, *spp);
init_shadow_page(kvm, split_sp, slot, gfn, role);
- trace_kvm_mmu_get_page(split_sp, true);
+out:
+ trace_kvm_mmu_get_page(split_sp, created);
return split_sp;
}
-static int kvm_mmu_split_huge_page(struct kvm *kvm,
- const struct kvm_memory_slot *slot,
- u64 *huge_sptep, struct kvm_mmu_page **spp)
+static void kvm_mmu_split_huge_page(struct kvm *kvm,
+ const struct kvm_memory_slot *slot,
+ u64 *huge_sptep, struct kvm_mmu_page **spp)
{
struct kvm_mmu_memory_cache *cache;
@@ -6246,22 +6243,11 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
u64 huge_spte, split_spte;
int split_level, index;
unsigned int access;
+ bool flush = false;
u64 *split_sptep;
gfn_t split_gfn;
split_sp = kvm_mmu_get_sp_for_split(kvm, slot, huge_sptep, spp);
- if (!split_sp)
- return -EOPNOTSUPP;
-
- /*
- * We did not allocate an extra pte_list_desc struct to add huge_sptep
- * to split_sp->parent_ptes. An extra pte_list_desc struct should never
- * be necessary in practice though since split_sp is brand new.
- *
- * Note, this makes it safe to pass NULL to __link_shadow_page() below.
- */
- if (WARN_ON_ONCE(split_sp->parent_ptes.val))
- return -EINVAL;
huge_spte = READ_ONCE(*huge_sptep);
@@ -6273,7 +6259,20 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
split_sptep = &split_sp->spt[index];
split_gfn = kvm_mmu_page_get_gfn(split_sp, index);
- BUG_ON(is_shadow_present_pte(*split_sptep));
+ /*
+ * split_sp may have populated page table entries if this huge
+ * page is aliased in multiple shadow page table entries. We
+ * know the existing SP will be mapping the same GFN->PFN
+ * translation since this is a direct SP. However, the SPTE may
+ * point to an even lower level page table that may only be
+ * partially filled in (e.g. for NX huge pages). In other words,
+ * we may be unmapping a portion of the huge page, which
+ * requires a TLB flush.
+ */
+ if (is_shadow_present_pte(*split_sptep)) {
+ flush |= !is_last_spte(*split_sptep, split_level);
+ continue;
+ }
split_spte = make_huge_page_split_spte(
huge_spte, split_level + 1, index, access);
@@ -6284,15 +6283,12 @@ static int kvm_mmu_split_huge_page(struct kvm *kvm,
/*
* Replace the huge spte with a pointer to the populated lower level
- * page table. Since we are making this change without a TLB flush vCPUs
- * will see a mix of the split mappings and the original huge mapping,
- * depending on what's currently in their TLB. This is fine from a
- * correctness standpoint since the translation will be identical.
+ * page table. If the lower-level page table identically maps the huge
+ * page, there's no need for a TLB flush. Otherwise, flush TLBs after
+ * dropping the huge page and before installing the shadow page table.
*/
- __drop_large_spte(kvm, huge_sptep, false);
- __link_shadow_page(NULL, huge_sptep, split_sp);
-
- return 0;
+ __drop_large_spte(kvm, huge_sptep, flush);
+ __link_shadow_page(cache, huge_sptep, split_sp);
}
static bool should_split_huge_page(u64 *huge_sptep)
@@ -6347,16 +6343,13 @@ static bool rmap_try_split_huge_pages(struct kvm *kvm,
if (dropped_lock)
goto restart;
- r = kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp);
-
- trace_kvm_mmu_split_huge_page(gfn, spte, level, r);
-
/*
- * If splitting is successful we must restart the iterator
- * because huge_sptep has just been removed from it.
+ * After splitting we must restart the iterator because
+ * huge_sptep has just been removed from it.
*/
- if (!r)
- goto restart;
+ kvm_mmu_split_huge_page(kvm, slot, huge_sptep, &sp);
+ trace_kvm_mmu_split_huge_page(gfn, spte, level, 0);
+ goto restart;
}
if (sp)
--
2.35.1.723.g4982287a31-goog